Clustering and Projected Clustering with Adaptive Neighbors

derivation

Posted by Kwan on May 8, 2020

My brain has not been working well lately, so I am writing this down for the record before I forget.

Given a data set $\left\lbrace x_{1}, x_{2}, \ldots, x_{n}\right\rbrace$ with each $x_i \in \mathbb{R}^{d}$, we denote by $X \in \mathbb{R}^{n \times d}$ the data matrix. For each sample $i$, its probabilistic neighbors can be derived by solving:

\[\min_{s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \sum_{j=1}^{n}\left(\left\|x_{i}-x_{j}\right\| _ {2}^{2} s_{i j}+\gamma s_{i j}^{2}\right) \tag{1}\label{gs1111}\]

where $s_{ij}$ represents the similarity between samples $i$ and $j$, and $\gamma>0$ is a hyperparameter. Without the regularization term $\gamma s_{i j}^{2}$, the optimal $s_i$ degenerates into a one-hot vector that puts all probability on the nearest neighbor; a larger $\gamma$ spreads the weights over more neighbors.

Property 1. The optimization problem $\eqref{gs1111}$ can be restated as:

\[\min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1}\left\|s_{i}+\frac{1}{2 \gamma} d_{i}^{x}\right\| _ {2}^{2} \tag{2}\]

where $s_{i} \in \mathbb{R}^{n \times 1}$ is a vector with the $j$-th element as $s_{ij}$ and $d_{i}^{x} \in \mathbb{R}^{n \times 1}$ is a vector with the $j$-th element as $d_{i j}^{x}=\left\|x_{i}-x_{j}\right\| _ {2}^{2}$.

Proof:

\[\begin{aligned}&\hspace{1.4em} \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \sum_{j=1}^{n}\left(\left\|x_{i}-x_{j}\right\|_ {2}^{2} s_{i j}+\gamma s_{i j}^{2}\right) \\ & \Leftrightarrow \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \sum_{j=1}^{n}s_{i j}^{2}+\frac{1}{\gamma}\sum_{j=1}^{n}\left\|x_{i}-x_{j}\right\|_ {2}^{2} s_{i j} \\ &\Leftrightarrow \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \sum_{j=1}^{n}s_{i j}^{2} + \frac{1}{2\gamma}\left(s_{i1}\left\|x_{i}-x_{1}\right\| _ {2}^{2}+\ldots+s_{in}\left\|x_{i}-x_{n}\right\| _ {2}^{2}\right) + \frac{1}{2\gamma}\left(s_{i1}\left\|x_{i}-x_{1}\right\| _ {2}^{2}+\ldots+s_{in}\left\|x_{i}-x_{n}\right\| _ {2}^{2}\right)+ \frac{1}{4\gamma^2}(d_i^x)^T d_i^x \\ &\Leftrightarrow \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} s_{i}^Ts_i + \frac{1}{2\gamma} (d_i^x)^T s_{i} + \frac{1}{2\gamma} s_{i}^T d_i^x + \frac{1}{4\gamma^2}(d_i^x)^T d_i^x \\ &\Leftrightarrow \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \left(s_{i} + \frac{1}{2\gamma} d_i^x\right)^T\left(s_{i} + \frac{1}{2\gamma} d_i^x\right) \\ &\Leftrightarrow \min _ {s_{i}^{T} 1=1,0 \leq s_{i} \leq 1} \left\|s_{i} + \frac{1}{2\gamma} d_i^x\right\| _ 2^2 \end{aligned}\]

Here scaling the objective by $\frac{1}{\gamma}>0$ and adding the constant $\frac{1}{4\gamma^2}(d_i^x)^T d_i^x$, which does not depend on $s_i$, change neither the minimizer nor the constraint set, so each step preserves the solution.
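Property 1 says that solving $\eqref{gs1111}$ for each $i$ amounts to a Euclidean projection of $-\frac{1}{2\gamma} d_i^x$ onto the probability simplex $\lbrace s : s \geq 0, s^T 1 = 1\rbrace$, which has a standard exact sort-and-threshold algorithm. Below is a minimal NumPy sketch of this; the helper names `project_simplex` and `adaptive_neighbors` are mine, not from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {s : s >= 0, s^T 1 = 1}, via the standard sort-and-threshold method."""
    u = np.sort(v)[::-1]                          # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)        # optimal shift
    return np.maximum(v - theta, 0.0)

def adaptive_neighbors(X, gamma):
    """Solve problem (1) row by row. By Property 1, the minimizer of
    ||s_i + d_i/(2*gamma)||^2 over the simplex is the projection of
    -d_i/(2*gamma) onto the simplex."""
    diff = X[:, None, :] - X[None, :, :]
    D = np.einsum('ijk,ijk->ij', diff, diff)      # D[i, j] = ||x_i - x_j||_2^2
    # Note: (1) as stated includes j = i with d_ii = 0, so s_ii tends to
    # get the largest weight; implementations often exclude the self-term.
    return np.vstack([project_simplex(-D[i] / (2.0 * gamma)) for i in range(len(X))])
```

As a quick sanity check of the role of $\gamma$: shrinking $\gamma$ concentrates each row of $S$ onto the nearest neighbor, while growing $\gamma$ pushes it toward the uniform distribution.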

Property 2. Assume the similarity matrix obtained from $\eqref{gs1111}$ is $S$. Regard each sample as a node in a graph whose weight matrix is $S$, and assign each node $i$ a function value $f_{i} \in \mathbb{R}^{c \times 1}$. Then it can be verified that:

\[\sum_{i, j=1}^{n}\left\|f_{i}-f_{j}\right\| _ {2}^{2} s_{i j}=2 Tr\left(F^{T} L_{S} F\right) \tag{3}\]

where $F \in \mathbb{R}^{n \times c}$ has its $i$-th row formed by $f_i$, $L_S = D_{S}-\frac{S^{T}+S}{2}$ is called the Laplacian matrix in graph theory, and the degree matrix $D_{S} \in \mathbb{R}^{n \times n}$ is the diagonal matrix whose $i$-th diagonal element is $\sum_{j}\left(s_{i j}+s_{j i}\right) / 2$.

Proof:

Since $F$’s $i$-th row is formed by $f_i$, we can write $F$ as $F=\begin{bmatrix}f_1^1 & f_1^2 & \cdots & f_1^c \\ f_2^1 & f_2^2 & \cdots & f_2^c \\\vdots & \vdots & \ddots & \vdots \\f_n^1 & f_n^2 & \cdots & f_n^c \end{bmatrix}$, where $f_i^j$ stands for the $j$-th element of $f_i$. We also view $F$ as a group of column vectors $F=\begin{bmatrix}q_1 & q_2 & \cdots & q_c \end{bmatrix}$, where $q_k=\begin{bmatrix} f_1^k \\ f_2^k \\ \vdots \\ f_n^k \end{bmatrix}$, and hence $F^T = \begin{bmatrix} q_1^T \\ q_2^T \\ \vdots \\ q_c^T \end{bmatrix}$. Besides, we write ${q_k}_ i$ for the $i$-th element of $q_k$, so ${q_k}_ i = f_i^k$. Then we have the following:

\[\begin{aligned}& \hspace{1.375em} 2 Tr\left(F^{T} L_{S} F\right) \\& = 2 Tr\left(F^{T} D_{S} F - F^{T} \frac{S^{T}+S}{2} F\right) \\& = Tr\left(F^{T} D_{S} F \right)-Tr\bigg(F^{T} \left( S^{T}+S \right) F\bigg) +Tr\left(F^{T} D_{S} F \right) \\ &= \sum_{k=1}^c q_k^T D_S q_k - \sum_{k=1}^c q_k^T (S^T+S)q_k + \sum_{k=1}^c q_k^T D_S q_k \\ &= \sum_{k=1}^c\sum_{i=1}^n {D_S} _ i {q_k}_ i^2 -\sum_{k=1}^c\left(\sum_{i=1}^n\sum_{j=1}^n {q_k}_ i \left(S^T+S\right)_ {ij}{q_k}_ j \right) + \sum_{k=1}^c\sum_{i=1}^n {D_S}_ i {q_k}_ i^2 \\ &= \sum_{k=1}^c\sum_{i=1}^n \left(\sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}\right){q_k}_ i^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n \left(s_{ij}+s_{ji}\right){q_k}_ i {q_k}_ j + \sum_{k=1}^c\sum_{i=1}^n \left(\sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}\right){q_k}_ i^2 \end{aligned}\]

As for the term $\sum_{k=1}^c\sum_{i=1}^n \left(\sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}\right){q_k}_ i^2$, both $i$ and $j$ are dummy summation indices, so we may interchange them: $\sum_{k=1}^c\sum_{i=1}^n \left(\sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}\right){q_k}_ i^2 = \sum_{k=1}^c\sum_{j=1}^n \left(\sum_{i=1}^n \frac{s_{ji}+s_{ij}}{2}\right){q_k}_ j^2=\sum_{k=1}^c\sum_{i=1}^n \left(\sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}\right){q_k}_ j^2$. Then we have:

\[\begin{aligned} & \hspace{1.34em} \\ &= \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}{q_k}_ i^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n \left(s_{ij}+s_{ji}\right){q_k}_ i {q_k}_ j + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}{q_k}_ j^2 \\ &= \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}{f_i^k}^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n \left(s_{ij}+s_{ji}\right){f_i^k}{f_j^k} + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}+s_{j i}}{2}{f_j^k}^2 \\& = \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_i^k}^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n s_{ij}{f_i^k}{f_j^k} + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_j^k}^2 \\ & + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{j i}}{2}{f_i^k}^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n s_{ji}{f_i^k}{f_j^k} + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{j i}}{2}{f_j^k}^2 \end{aligned}\]

Interchanging the indices $i$ and $j$ in the three $s_{ji}$ terms in the same way,

\[\begin{aligned} & \hspace{1.34em} \\& = \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_i^k}^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n s_{ij}{f_i^k}{f_j^k} + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_j^k}^2 \\ & + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_j^k}^2 -\sum_{k=1}^c\sum_{i=1}^n\sum_{j=1}^n s_{i j}{f_j^k}{f_i^k} + \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n \frac{s_{i j}}{2}{f_i^k}^2 \\ &= \sum_{k=1}^c\sum_{i=1}^n \sum_{j=1}^n s_{i j}\left({f_i^k}^2 - 2{f_i^k}{f_j^k} + {f_j^k}^2\right) \\& = \sum_{i=1}^n \sum_{j=1}^n s_{i j} \left(\sum_{k=1}^c\left({f_i^k}-{f_j^k}\right)^2\right) \\& = \sum_{i=1}^n \sum_{j=1}^n s_{i j}\left\|f_i-f_j\right\|_2^2 \end{aligned}\]

which completes the proof.
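As a sanity check on Property 2, identity (3) is easy to verify numerically for a random (not necessarily symmetric) $S$ and a random $F$. The following is a throwaway NumPy check, with variable names of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 6, 3
S = rng.random((n, n))                       # arbitrary, not necessarily symmetric
F = rng.standard_normal((n, c))              # row i of F is f_i

D_S = np.diag((S + S.T).sum(axis=1) / 2.0)   # degree matrix from Property 2
L_S = D_S - (S + S.T) / 2.0                  # Laplacian matrix

lhs = sum(S[i, j] * np.sum((F[i] - F[j]) ** 2)
          for i in range(n) for j in range(n))
rhs = 2.0 * np.trace(F.T @ L_S @ F)
assert np.isclose(lhs, rhs)                  # identity (3) holds
```

The check passes even for an asymmetric $S$ because the symmetrization $\frac{S^T+S}{2}$ is built into $L_S$.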