Paper Reading: Neural IR Meets Graph Embedding: A Ranking Model for Product Search

This paper claims to be the first study of how to use click-graph features in neural models for retrieval. The graph embedding techniques proposed in the paper can also be plugged into other scenarios where graph-structured information is available (e.g. combined with other models in an ensemble).

1. Baseline

First we describe the basic IR model for product search without the proposed graph embedding techniques (the baseline). As shown in Figure 1, CNNs are used to extract semantic query features v_q^{sem} from the query terms and semantic product features v_p^{sem} from the product description. The product id embedding v_p is concatenated with v_q^{sem} and v_p^{sem}, and finally an MLP predicts the relevance score. The structure is similar to C-DSSM.

Figure 1
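To make the structure concrete, here is a minimal PyTorch sketch of this baseline. The layer sizes, kernel size, and the shared text encoder are my assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class BaselineRanker(nn.Module):
    """Baseline: CNN text encoders + product id embedding + MLP scorer."""
    def __init__(self, vocab_size, num_products, emb_dim=128, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # 1-D convolution over word embeddings, max-pooled into a
        # fixed-length semantic vector (v_q^{sem} / v_p^{sem}).
        # Shared between query and description here for simplicity.
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.prod_id_emb = nn.Embedding(num_products, emb_dim)   # v_p
        self.mlp = nn.Sequential(
            nn.Linear(hidden + hidden + emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode_text(self, token_ids):
        x = self.word_emb(token_ids).transpose(1, 2)   # (B, emb_dim, L)
        x = torch.relu(self.conv(x))                   # (B, hidden, L)
        return x.max(dim=2).values                     # (B, hidden)

    def forward(self, query_ids, desc_ids, product_ids):
        v_q_sem = self.encode_text(query_ids)          # from query terms
        v_p_sem = self.encode_text(desc_ids)           # from product description
        v_p = self.prod_id_emb(product_ids)            # product id embedding
        feats = torch.cat([v_q_sem, v_p_sem, v_p], dim=1)
        return self.mlp(feats).squeeze(-1)             # relevance score
```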

2. Graph Embedding-based Ranking Model

Assume we have a query graph G_{query} = (Q, E_Q) and a product graph G_{product} = (P, E_P). There are different ways to construct such graphs, e.g. from click-through data: queries that click the same product can be linked, and products appearing in the same order can be linked, as shown in Figure 2. These graphs provide additional information for the retrieval task.

Figure 2
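For illustration, here is a small sketch of how such graphs could be built from a click log. The input format and the co-click criterion for product edges are my assumptions, not the paper's exact construction:

```python
from collections import defaultdict
from itertools import combinations

def build_graphs(click_log):
    """click_log: iterable of (query, product) click pairs.
    Returns edge sets for the query graph and the product graph."""
    products_clicked_for = defaultdict(set)   # query   -> clicked products
    queries_clicking = defaultdict(set)       # product -> clicking queries

    for query, product in click_log:
        products_clicked_for[query].add(product)
        queries_clicking[product].add(query)

    # Queries that clicked the same product are linked.
    query_edges = set()
    for queries in queries_clicking.values():
        query_edges.update(combinations(sorted(queries), 2))

    # Products clicked by the same query are linked (a stand-in for the
    # paper's "appearing in the same order" criterion).
    product_edges = set()
    for products in products_clicked_for.values():
        product_edges.update(combinations(sorted(products), 2))

    return query_edges, product_edges

# Usage with a toy log
log = [("red shoes", "p1"), ("running shoes", "p1"), ("red shoes", "p2")]
q_edges, p_edges = build_graphs(log)
print(q_edges)   # {('red shoes', 'running shoes')}
print(p_edges)   # {('p1', 'p2')}
```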

The paper proposes to replace the product id embeddings with product graph embeddings, as shown in Figure 3. For the graph embeddings, they concatenate DeepWalk embeddings with first-order LINE embeddings.

Figure 3
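One possible way to plug these precomputed graph embeddings into the model, assuming DeepWalk and first-order LINE vectors have already been trained offline for every product in the graph (the dimensions and the choice to freeze the vectors are illustrative):

```python
import numpy as np
import torch
import torch.nn as nn

def build_product_graph_embedding(product_ids, deepwalk_vecs, line_vecs):
    """Concatenate DeepWalk and first-order LINE vectors per product and
    wrap them in an nn.Embedding used in place of the id embedding.

    deepwalk_vecs / line_vecs: dict mapping product id -> np.ndarray."""
    rows = [np.concatenate([deepwalk_vecs[pid], line_vecs[pid]])
            for pid in product_ids]
    weight = torch.tensor(np.stack(rows), dtype=torch.float32)
    # freeze=True keeps the graph embeddings fixed during ranking training;
    # they could also be fine-tuned.
    return nn.Embedding.from_pretrained(weight, freeze=True)

# Usage with toy 4-dimensional embeddings (illustrative only)
deepwalk = {"p1": np.ones(4), "p2": np.zeros(4)}
line     = {"p1": np.zeros(4), "p2": np.ones(4)}
emb = build_product_graph_embedding(["p1", "p2"], deepwalk, line)
print(emb(torch.tensor([0, 1])).shape)   # torch.Size([2, 8])
```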

Using query graph embeddings is more challenging, since at test time there can be many new queries that are not in the query graph built at training time. One option is to fit a regression model (e.g. a CNN or RNN) that takes the query terms as input and "mimics" the query graph embeddings:

\arg\min_f \; \|f(t_1^{(q)}, \ldots, t_M^{(q)}) - e_q\|^2

and then use f to generate embeddings for unseen queries. However, this method can easily overfit. Instead, the paper proposes to use an MLP to transform v_q^{sem} into the same vector space as e_q and to minimize the reconstruction error, as shown in Figure 3:

\min \; \mathcal{L}_{re} = \|\hat{e}_q - e_q\|^2

\text{s.t.} \; \hat{e}_q = \mathrm{MLP}(v_q^{sem})

In this way, the reconstruction error can be optimized jointly with the final ranking loss.
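A sketch of this reconstruction head, with the squared-error term computed only for queries that actually appear in the graph (the layer sizes below are assumptions):

```python
import torch
import torch.nn as nn

class QueryGraphReconstructor(nn.Module):
    """Maps the semantic query vector v_q^{sem} into the query graph
    embedding space, so unseen queries still get a graph-like vector."""
    def __init__(self, sem_dim=256, graph_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sem_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, graph_dim),
        )

    def forward(self, v_q_sem):
        return self.mlp(v_q_sem)                 # \hat{e}_q

def reconstruction_loss(e_q_hat, e_q):
    # L_re = ||e_q_hat - e_q||^2, averaged over the batch
    return ((e_q_hat - e_q) ** 2).sum(dim=1).mean()

# Usage: e_q comes from the trained query graph embedding, so the loss
# is only evaluated for queries present in the graph.
recon = QueryGraphReconstructor()
v_q_sem = torch.randn(8, 256)
e_q = torch.randn(8, 128)
loss = reconstruction_loss(recon(v_q_sem), e_q)
```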

3. Training

Given a query q along with a positive product p_{+} and a negative product p_{-}, the ranking loss is defined as:

\mathcal{L}_{rank}(q, p_{+}, p_{-}) = h(\mathrm{score}(q, p_{+}), \mathrm{score}(q, p_{-}))

where score is the relevance score from the model proposed in the previous section and h is the smoothed hinge loss. The reconstruction error is minimized at the same time, so the overall loss function is:

\mathcal{L} = \sum_{q \in Q} \sum_{p_{+}} \sum_{p_{-}} \mathcal{L}_{rank}(q, p_{+}, p_{-}) + \alpha \cdot \sum_{q \in Q} \mathcal{L}_{re}(q)
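Putting the two terms together, here is a sketch of the loss computation for one batch. The smoothed hinge below is the common quadratically-smoothed variant and may differ from the paper's exact formulation; alpha = 0.1 is an arbitrary placeholder:

```python
import torch

def smoothed_hinge(pos_score, neg_score, margin=1.0):
    """Quadratically smoothed hinge on the margin violation z."""
    z = margin - (pos_score - neg_score)
    return torch.where(
        z <= 0, torch.zeros_like(z),
        torch.where(z < margin, 0.5 * z ** 2 / margin, z - 0.5 * margin),
    )

def total_loss(pos_scores, neg_scores, e_q_hat, e_q, alpha=0.1):
    # L = L_rank + alpha * L_re (here averaged over the batch of triplets)
    l_rank = smoothed_hinge(pos_scores, neg_scores).mean()
    l_re = ((e_q_hat - e_q) ** 2).sum(dim=1).mean()
    return l_rank + alpha * l_re

# Usage with toy tensors
pos = torch.tensor([2.0, 0.3])
neg = torch.tensor([1.0, 0.5])
e_q_hat, e_q = torch.randn(2, 128), torch.randn(2, 128)
print(total_loss(pos, neg, e_q_hat, e_q))
```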

4. Results

Here I summarize some interesting results. First, they find that the performance of the baseline starts to decline for long-tail queries, while the proposed method remains robust. Second, they show that the baseline tends to rank popular products higher than the proposed method does.

Figure 4

They also show that the proposed model better captures transitivity: it can discriminate among query-product pairs of decreasing relevance (3-hop, 5-hop, and so on, relative to completely random pairs).
