Paper Reading: Beyond [CLS] through Ranking by Generation

venue: EMNLP 2020 (link) Previous work that uses a pretrained language model (PLM) such as BERT for information retrieval takes the [CLS] embedding of the concatenation of the query and document as the feature for discriminative learning. In other words, the relevance label for a given (query, document) pair is modeled as p(relevant | q, d) = f(h_[CLS]), where h_[CLS] is the [CLS] embedding from the last layer of BERT and f is usually a classification …
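The [CLS]-based discriminative scorer described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name `score_cls` and the toy vectors are hypothetical, and in practice `h_cls` would come from BERT's last layer rather than being hand-written.

```python
import math

def score_cls(h_cls, w, b):
    """Relevance score for one (query, document) pair.

    h_cls: [CLS] embedding of the concatenated query+document
           (in a real system, taken from BERT's last layer).
    w, b:  weights and bias of the classification head f.
    Returns p(relevant | query, document) via a sigmoid.
    """
    logit = sum(wi * hi for wi, hi in zip(w, h_cls)) + b
    return 1.0 / (1.0 + math.exp(-logit))

# Toy 4-dimensional example (real [CLS] vectors are 768-d for BERT-base).
h = [0.2, -0.1, 0.4, 0.3]
w = [1.0, 0.5, -0.2, 0.8]
p = score_cls(h, w, b=0.1)
```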

Paper Reading: Entities with Quantities: Extraction, Search, and Ranking

venue: WSDM 2020 (Demonstration) demo link: https://qsearch.mpi-inf.mpg.de/ Traditional search engines do not understand quantities and often fail to return results that satisfy the quantity conditions in the query. This paper introduces Qsearch, which was originally proposed in "Qsearch: Answering quantity queries from text" (ISWC 2019). The overall framework is shown in Figure 1 and consists of two phases: Extract and Answer. 1. Extract …
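To give a flavor of the Extract phase, here is a deliberately simplified sketch that pulls (entity, value, unit) triples out of raw text. The single regex and the helper name `extract_quantity_facts` are my own toy stand-ins; Qsearch's actual extractor relies on entity recognition and learned models, not a pattern like this.

```python
import re

# Toy (entity, value, unit) pattern; Qsearch's real Extract phase uses
# NER and learned extraction models, not one regex.
FACT_RE = re.compile(
    r"(?P<entity>[A-Z][\w ]+?) (?:costs|weighs|is) "
    r"(?P<value>\d+(?:\.\d+)?) ?(?P<unit>[A-Za-z$€]+)"
)

def extract_quantity_facts(text):
    """Return a list of (entity, numeric value, unit) triples."""
    return [(m.group("entity"), float(m.group("value")), m.group("unit"))
            for m in FACT_RE.finditer(text)]

facts = extract_quantity_facts(
    "Tesla Model 3 costs 39990 USD. An iPhone weighs 174 grams."
)
```

Once such facts are indexed, the Answer phase can match a query's quantity condition (e.g., "under 40000 USD") against the extracted values.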

Paper Reading: Neural IR Meets Graph Embedding: A Ranking Model for Product Search

This paper claims to be the first study of how to use click-graph features in neural models for retrieval. The graph embedding techniques proposed in the paper can be plugged into other scenarios where graph-structured information is available (ensemble). 1. Baseline First, we describe the basic IR model for product search without the proposed graph embedding techniques (baseline). As shown in Figure 1, a CNN is used to …
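One way to read the ensemble idea above: each product carries both a text embedding (e.g., from the CNN) and a click-graph node embedding, and the two are combined when scoring against the query. The concatenation scheme, helper names, and toy vectors below are my own illustration under that assumption, not the paper's exact model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_products(query_vec, products):
    """products: list of (name, text_vec, graph_vec).

    Scores each product by cosine similarity between the query vector
    and the concatenation of its text and click-graph embeddings,
    then returns products sorted by descending score.
    """
    scored = [(name, cosine(query_vec, text_vec + graph_vec))
              for name, text_vec, graph_vec in products]
    return sorted(scored, key=lambda x: -x[1])

# Toy 2-d text + 2-d graph embeddings (hypothetical values).
query = [1.0, 0.0, 0.5, 0.5]
catalog = [("laptop", [0.9, 0.1], [0.5, 0.4]),
           ("banana", [0.0, 1.0], [0.1, 0.9])]
ranking = rank_products(query, catalog)
```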