The basic notations for GCN are the same as in this post.
1. Graph Tensor Definition
Here we first give the formal definition of a graph tensor, which consists of a series of graphs.
A graph tensor is defined as $\mathcal{G} = \{G_1, G_2, \dots, G_r\}$, where $G_i = (V_i, E_i)$, $V_i = V_j$, and $E_i \neq E_j$ if $i \neq j$ (when $i, j \in \{1, \dots, r\}$). Here $G_i$ is the i-th graph in the graph tensor, $V_i$ is the set of the i-th graph's nodes, and $E_i$ is the set of the i-th graph's edges. $A_i \in \mathbb{R}^{n \times n}$ is the i-th adjacency matrix, where $n$ is the number of nodes. For convenience, all adjacency matrices are packed into a graph adjacency tensor $\mathcal{A} \in \mathbb{R}^{r \times n \times n}$. Similarly, the graph feature matrices are also packed into a graph feature tensor $\mathcal{H} \in \mathbb{R}^{r \times n \times d}$, where $H_i \in \mathbb{R}^{n \times d}$ is the feature matrix of the i-th graph and $d$ is the feature dimension.
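For intuition, here is a minimal sketch of this notation with random toy data; the variable names and shapes are only illustrative:

```python
import numpy as np

# A minimal sketch of the notation above: r graphs over the same n nodes,
# each with its own adjacency matrix A_i (n x n) and feature matrix H_i (n x d).
r, n, d = 3, 5, 4

# Pack the r adjacency matrices into a graph adjacency tensor of shape (r, n, n).
A = np.stack([np.random.randint(0, 2, size=(n, n)).astype(float) for _ in range(r)])

# Pack the r feature matrices into a graph feature tensor of shape (r, n, d).
H = np.stack([np.random.randn(n, d) for _ in range(r)])

print(A.shape)  # (3, 5, 5)
print(H.shape)  # (3, 5, 4)
```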
2. Text Graph Tensor Construction
To construct a graph for a set of documents, words and documents are treated as nodes. Two types of edges are considered: word-document edges and word-word edges. For word-document edges, the TF-IDF method is used to calculate the edge weights. Three types of word-word edges (semantic, syntactic, and sequential) are considered to construct three text graphs, which are described in the following:
- Semantic-based graph
An LSTM is trained on the training data. For every sentence/document, word embeddings from the LSTM are used to calculate the cosine similarity between words. If the similarity exceeds a predefined threshold, we say the two words have a semantic relationship in the current sentence/document. The edge weight between two words $i$ and $j$ is defined as:

$$w_{ij} = \frac{N_{\text{sem}}(i, j)}{N_{\text{total}}(i, j)}$$

where $N_{\text{sem}}(i, j)$ denotes the number of times the two words have a semantic relationship over all sentences/documents, and $N_{\text{total}}(i, j)$ denotes the number of times the two words exist in the same sentence/document.
- Syntactic-based graph
For every sentence/document, Stanford CoreNLP is used to extract the dependency between two words. The edge weight between two words in the syntactic-based graph is defined as:

$$w_{ij} = \frac{N_{\text{syn}}(i, j)}{N_{\text{total}}(i, j)}$$

where $N_{\text{syn}}(i, j)$ denotes the number of times the two words have a syntactic dependency over all sentences/documents, and $N_{\text{total}}(i, j)$ is defined as above.
- Sequential-based graph
This one is also used in Graph Convolutional Networks for Text Classification; it depicts the local co-occurrence (PMI) between words. The edge weight between two words in the sequential-based graph is defined as:

$$w_{ij} = \text{PMI}(i, j) = \log \frac{p(i, j)}{p(i)\, p(j)}$$

where $p(i, j) = \frac{\#W(i, j)}{\#W}$ is the probability of the two words co-occurring in the same sliding window, $\#W(i, j)$ is the number of times the two words co-occur in the same sliding window over the corpus, $\#W$ is the total number of sliding windows, and $p(i) = \frac{\#W(i)}{\#W}$ is the probability of word $i$ occurring in a given window over the corpus. A minimal code sketch of this computation is given after this list.
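As referenced above, here is a minimal sketch of the sliding-window PMI computation for the sequential-based graph; the function name, tokenization, and window handling are assumptions for illustration:

```python
from collections import Counter
from itertools import combinations
import math

def pmi_edges(docs, window_size=20):
    """Compute sequential-graph edge weights as positive PMI over sliding windows.
    `docs` is a list of tokenized documents (names here are illustrative)."""
    window_count = 0            # #W: total number of sliding windows
    word_windows = Counter()    # #W(i): windows containing word i
    pair_windows = Counter()    # #W(i, j): windows containing both i and j

    for tokens in docs:
        if len(tokens) <= window_size:
            windows = [tokens]
        else:
            windows = [tokens[k:k + window_size]
                       for k in range(len(tokens) - window_size + 1)]
        for window in windows:
            window_count += 1
            uniq = sorted(set(window))
            word_windows.update(uniq)
            pair_windows.update(combinations(uniq, 2))

    edges = {}
    for (wi, wj), n_ij in pair_windows.items():
        p_ij = n_ij / window_count
        p_i = word_windows[wi] / window_count
        p_j = word_windows[wj] / window_count
        pmi = math.log(p_ij / (p_i * p_j))
        if pmi > 0:             # keep only positive PMI values as edges
            edges[(wi, wj)] = pmi
    return edges

docs = [["graph", "neural", "network"], ["text", "classification", "task"]]
print(pmi_edges(docs, window_size=3))  # words sharing a window get PMI = log 2 here
```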
3. Graph Tensor Learning
After obtaining the constructed graph tensor, we can exploit learning frameworks to perform GCN on the graph tensor.
- Merge edges + GCN
Since a graph tensor consists of multiple graphs, a simple strategy is to merge the edges into one graph by pooling the adjacency tensor, e.g., max pooling or mean pooling. However, experiments show that this simple pooling does not work well. Another way is to merge the graph adjacency matrices by edge-wise attention:

$$A = \sum_{i=1}^{r} W_{edge}^{i} \odot A_i$$

where $W_{edge}^i$ is the attention matrix with the same size as the adjacency matrix, and $\odot$ denotes element-wise multiplication.
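A minimal PyTorch sketch of this edge-wise merging idea, assuming one learnable matrix per graph of the same size as the adjacency matrix (the module name and initialization are illustrative, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn

class EdgeWiseMerge(nn.Module):
    """Merge a graph adjacency tensor (r, n, n) into a single adjacency matrix
    with learnable edge-wise weights (an illustrative parameterization)."""
    def __init__(self, num_graphs, num_nodes):
        super().__init__()
        # One W_edge^i per graph, the same size as the adjacency matrix.
        self.w_edge = nn.Parameter(torch.ones(num_graphs, num_nodes, num_nodes))

    def forward(self, adj_tensor):                    # adj_tensor: (r, n, n)
        # Element-wise weighting of each A_i, then sum over the r graphs.
        return (self.w_edge * adj_tensor).sum(dim=0)  # (n, n)

adj_tensor = torch.rand(3, 5, 5)
merged = EdgeWiseMerge(num_graphs=3, num_nodes=5)(adj_tensor)
print(merged.shape)  # torch.Size([5, 5])
```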
- TensorGCN
Instead of merging all graphs, TensorGCN performs two types of propagation learning: intra-graph propagation and inter-graph propagation. The overall procedure is shown in Figure 1. For example, the l-th layer of TensorGCN can be described as:

$$\mathcal{H}^{(l+1)} = f_{\text{inter}}\big(f_{\text{intra}}(\mathcal{A}, \mathcal{H}^{(l)})\big)$$

where $f_{\text{intra}}$ and $f_{\text{inter}}$ denote the two propagation steps described below.
Intra-graph propagation aggregates information from the neighbors of each node within a graph. It is essentially the vanilla GCN, but performed in a tensor version. The feature matrix of the i-th graph in the l-th layer is updated as:

$$\tilde{H}_i^{(l)} = \sigma\big(\hat{A}_i H_i^{(l)} W_i^{(l)}\big)$$

where $\hat{\mathcal{A}}$ is the normalized symmetric graph adjacency tensor with $\hat{A}_i = \tilde{D}_i^{-\frac{1}{2}} \tilde{A}_i \tilde{D}_i^{-\frac{1}{2}}$, $\tilde{A}_i = A_i + I$, and $\tilde{\mathcal{D}}$ is the node degree tensor of $\tilde{\mathcal{A}}$.
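A minimal PyTorch sketch of intra-graph propagation as described above, i.e., a vanilla GCN layer applied slice-by-slice to the tensor (names, activation choice, and initialization are assumptions):

```python
import torch
import torch.nn as nn

def normalize_adj_tensor(adj_tensor):
    """Symmetrically normalize each slice: D_i^{-1/2} (A_i + I) D_i^{-1/2}."""
    r, n, _ = adj_tensor.shape
    a_tilde = adj_tensor + torch.eye(n).expand(r, n, n)  # add self-connections
    deg = a_tilde.sum(dim=-1)                            # node degree tensor (r, n)
    d_inv_sqrt = deg.pow(-0.5)
    d_inv_sqrt[torch.isinf(d_inv_sqrt)] = 0.0
    d_mat = torch.diag_embed(d_inv_sqrt)                 # (r, n, n)
    return d_mat @ a_tilde @ d_mat

class IntraGraphPropagation(nn.Module):
    """A vanilla GCN layer applied to every graph slice of the tensor,
    each slice with its own weight matrix W_i (an illustrative version)."""
    def __init__(self, num_graphs, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_graphs, in_dim, out_dim) * 0.01)

    def forward(self, adj_tensor, feat_tensor):   # (r, n, n), (r, n, in_dim)
        a_hat = normalize_adj_tensor(adj_tensor)
        return torch.relu(a_hat @ feat_tensor @ self.weight)  # (r, n, out_dim)

out = IntraGraphPropagation(3, 4, 8)(torch.rand(3, 5, 5), torch.rand(3, 5, 4))
print(out.shape)  # torch.Size([3, 5, 8])
```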
Inter-graph propagation propagates/exchanges information between different graphs in the tensor. To achieve this, virtual graphs are constructed by connecting the corresponding nodes across the graphs, as shown in Figure 1. Since every graph has the same set of nodes, n nodes result in n virtual graphs and a new graph adjacency tensor $\mathcal{A}^* \in \mathbb{R}^{n \times r \times r}$. The inter-graph information propagation is defined as:

$$H_j^{(l+1)} = \sigma\big(A_j^{*}\, \tilde{H}_j^{(l)}\, W^{*(l)}\big)$$

where $A_j^{*} \in \mathbb{R}^{r \times r}$ is the adjacency matrix of the j-th virtual graph, $\tilde{H}_j^{(l)} \in \mathbb{R}^{r \times d_l}$ collects the intra-graph outputs of node $j$ from all $r$ graphs, and the resulting $\mathcal{H}^{(l+1)}$ is the output of inter-graph propagation and the input feature tensor of the (l+1)-th layer in TensorGCN. There is neither symmetric normalization nor self-connections added.
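A minimal PyTorch sketch of inter-graph propagation, assuming a fully connected r-node virtual graph shared by all nodes and a shared weight matrix (both are assumptions for illustration):

```python
import torch
import torch.nn as nn

class InterGraphPropagation(nn.Module):
    """Exchange information between the r copies of each node across the graphs
    via an r-node virtual graph (an illustrative version)."""
    def __init__(self, num_graphs, dim):
        super().__init__()
        # One r x r virtual-graph adjacency shared by every node. Following the
        # text above, no symmetric normalization is applied; the fully connected
        # all-ones form is an assumption for illustration.
        self.register_buffer("virtual_adj", torch.ones(num_graphs, num_graphs))
        self.weight = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, feat_tensor):                # (r, n, d) from the intra step
        per_node = feat_tensor.permute(1, 0, 2)    # (n, r, d): one matrix per virtual graph
        mixed = self.virtual_adj @ per_node @ self.weight   # (n, r, d)
        return torch.relu(mixed).permute(1, 0, 2)  # back to (r, n, d)

out = InterGraphPropagation(num_graphs=3, dim=8)(torch.rand(3, 5, 8))
print(out.shape)  # torch.Size([3, 5, 8])
```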
In the last layer of TensorGCN, after completing inter-graph propagation, the representations of document nodes are obtained by a mean pooling over the graphs for classification.
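A minimal sketch of this final readout, assuming hypothetical document-node indices:

```python
import torch

# Sketch of the final readout: average node representations over the r graphs,
# then pick out the document nodes (the index set `doc_idx` is hypothetical).
feat_tensor = torch.rand(3, 5, 16)      # (r, n, d) output of the last TensorGCN layer
node_repr = feat_tensor.mean(dim=0)     # (n, d): mean pooling over the graphs
doc_idx = torch.tensor([3, 4])          # positions of the document nodes
doc_repr = node_repr[doc_idx]           # fed to a softmax classifier / cross-entropy
print(doc_repr.shape)                   # torch.Size([2, 16])
```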