Research

\begin{equation} \hat{X}_{\rm u} = \hat{X}_{\rm in} \circ {M}_{\rm u} = \big[ \mathrm{real}(\mathcal{K}) \circ {M}_{\rm u}, \ \mathrm{imag}(\mathcal{K}) \circ {M}_{\rm u} \big] \in \mathbb{R}^{m \times n \times 2}, \label{eq:hardamard_undersample} \end{equation}
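As a minimal illustration of the Hadamard undersampling above (a NumPy sketch of ours, not code from [1]; the random mask here only stands in for the learned \(M_{\rm u}\)):

\begin{verbatim}
import numpy as np

def undersample_kspace(kspace, mask):
    # Apply a binary sampling matrix M_u to complex k-space K by an
    # element-wise (Hadamard) product and return the two-channel real
    # tensor X_u of shape (m, n, 2), as in the equation above.
    real_part = np.real(kspace) * mask   # real(K) masked by M_u
    imag_part = np.imag(kspace) * mask   # imag(K) masked by M_u
    return np.stack([real_part, imag_part], axis=-1)

# Toy usage: 256 x 256 k-space with roughly 20% of entries kept at random.
rng = np.random.default_rng(0)
kspace = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
mask = (rng.random((256, 256)) < 0.20).astype(float)
x_u = undersample_kspace(kspace, mask)   # shape (256, 256, 2)
\end{verbatim}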

Fig. 1 Proposed 2D probabilistic undersampling layer: \(P_{\rm u}\) is the probability matrix; \(M_{\rm u}\) is the sampling matrix

Fig. 2 Probability matrices and sampling matrices from our 2D probabilistic undersampling layer (no RecNet), trained with the undersampling loss and stable constraints. Sampling rate: (a) 10%; (b) 20%; (c) 30%; (d) 40%

Fig. 3 Curves of the probability matrices from our 2D probabilistic undersampling layer: the 3D probability curve \(P_\mathrm{face}\), the central probability curve \(P_\mathrm{center}\), and the marginal probability curve \(P_\mathrm{margin}\). Sampling rate: (a) 10%; (b) 20%; (c) 30%; (d) 40%

\begin{equation}
P_\mathrm{center}(t) =
\begin{cases}
1, & |t| \leq {t_1}, \\
1 - \frac{1}{100\sqrt{\mathrm{rate}}} \mathrm{e}^{ \frac{8.1}{ \mathrm{rate} + {2}/{3}} (t - t_1) }, & t_1 < |t| < t_0, \\
\frac{\mathrm{rate}}{\sqrt{2 \pi}}, & t_0 \leq |t| \leq 1 ,
\end{cases}
\label{eq:final_P_center}
\end{equation}
\begin{equation}
P_\mathrm{margin}(t) =
\begin{cases}
-\frac{\mathrm{rate}}{3 \sqrt{2 \pi} \sigma^{4}} t^2 + \frac{2 \, \mathrm{rate}}{\sqrt{2 \pi} \sigma}, & |t| < t_0, \\
\frac{\mathrm{rate}}{\sqrt{2 \pi}}, & t_0 \leq |t| \leq 1 ,
\end{cases}
\label{eq:final_P_margin}
\end{equation}
\begin{equation}
t = \dfrac{z - 128}{128} \in [-1, 1], \ \ z = 0,1,2,\ldots,256.
\end{equation}
\begin{equation}
P_\mathrm{face}(y,z) =
\begin{cases}
1, & |d| \leq t_1, \\
1 - \frac{1}{100\sqrt{\mathrm{rate}}} \mathrm{e}^{ \frac{8.1}{ \mathrm{rate} + {2}/{3}} \left (d - t_1 \right ) }, & t_1 < |d| < t_0, \\
\frac{\mathrm{rate}}{\sqrt{2 \pi}}, & t_0 \leq |d| \leq \sqrt{2} ,
\end{cases}
\label{eq:final_P_face}
\end{equation}
\begin{gather}
d = \frac{\sqrt{(y-128)^2 + (z-128)^2}}{128} \in [0, \sqrt{2}], \\
\forall \, y = 0, 1, 2, \ldots, 256, \ \ \forall \, z = 0,1, 2, \ldots, 256.
\end{gather}
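A direct piecewise evaluation of \(P_\mathrm{center}\) and \(P_\mathrm{margin}\) (our sketch, not the paper's code; the transition points \(t_0\), \(t_1\) and the width \(\sigma\) depend on the sampling rate and are taken here as inputs; we read the exponent \(t - t_1\) as \(|t| - t_1\) so the curve decays symmetrically, matching the form of \(P_\mathrm{face}\)):

\begin{verbatim}
import numpy as np

def p_center(t, rate, t0, t1):
    # Central probability curve P_center(t); rate is the target sampling rate.
    at = abs(t)
    if at <= t1:
        return 1.0
    if at < t0:
        return 1.0 - np.exp(8.1 / (rate + 2.0 / 3.0) * (at - t1)) / (100.0 * np.sqrt(rate))
    return rate / np.sqrt(2.0 * np.pi)

def p_margin(t, rate, t0, sigma):
    # Marginal probability curve P_margin(t): an inverted parabola near the
    # center that levels off at rate / sqrt(2*pi) toward the edges.
    if abs(t) < t0:
        return (-rate / (3.0 * np.sqrt(2.0 * np.pi) * sigma**4) * t**2
                + 2.0 * rate / (np.sqrt(2.0 * np.pi) * sigma))
    return rate / np.sqrt(2.0 * np.pi)
\end{verbatim}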

Fig. 4 Our optimal probability matrix and sampling matrix at a sampling rate of 20%. A 3D view of our undersampling pattern is given, where y and z are the two phase-encoding directions and x is the frequency-encoding direction

[1]   S. Xue, R. Bai, and X. Jin. 2D probabilistic undersampling pattern optimization for MR image reconstruction. arXiv preprint, 2020. arXiv:2003.03797

Fig. 5 Overall structure of our wavelet-based residual attention network.

Fig. 6 Multi-kernel convolutional layer, channel attention module, and spatial attention module.

[2]   S. Xue, W. Qiu, F. Liu, and X. Jin. Wavelet-based residual attention network for image super-resolution. Neurocomputing, 2020, 382:116-126. DOI 10.1016/j.neucom.2019.11.044

Fig. 7 Structure of our proposed network with one block.

Fig. 8 Overall structure of our proposed improved frequency domain neural network.

[3]   S. Xue, W. Qiu, F. Liu, and X. Jin. Faster super-resolution by improved frequency domain neural networks. Signal, Image and Video Processing, 2020, 14:257–265. DOI 10.1007/s11760-019-01548-8

Fig. 9 Illustration of the t-SVD of an \(n_1 \times n_2 \times n_3\) tensor, i.e., \(\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^{\text{T}}\). Our tensor nuclear norm \(\|\mathcal{A}\|_*\) is defined as the sum of the singular values of all frontal slices of the f-diagonal tensor \(\mathcal{S}\), i.e., \( \|\mathcal{A}\|_* \triangleq \text{tr}(\mathcal{S}) = \sum_{i=1}^{n_3} \text{tr}({S}^{(i)}) = \text{tr}(\bar{{S}}^{(1)}) = \|\bar{{A}}^{(1)}\|_*\). Note that our tensor nuclear norm becomes the standard matrix nuclear norm when \(n_3 = 1\). Thus, our tensor nuclear norm can be considered a direct extension of the matrix nuclear norm to the tensor case.
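Under the definition in the Fig. 9 caption, the norm reduces to the matrix nuclear norm of the first (DC) frontal slice of \(\bar{\mathcal{A}}\); a minimal NumPy sketch under that reading (ours, not the authors' code):

\begin{verbatim}
import numpy as np

def tensor_nuclear_norm(A):
    # A: real array of shape (n1, n2, n3).
    # The FFT along the third dimension block-diagonalizes the t-product;
    # the first frontal slice of the result equals the sum of all frontal
    # slices of A, and its matrix nuclear norm equals tr(S) in Fig. 9.
    A_bar = np.fft.fft(A, axis=2)
    dc_slice = A_bar[:, :, 0]
    return float(np.linalg.norm(dc_slice, ord='nuc'))
\end{verbatim}

For \(n_3 = 1\) this is exactly the matrix nuclear norm, consistent with the caption.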

\begin{equation} \label{eq:tnnr_with_max}
\begin{aligned}
\min_{\mathcal{X}} \ \ & \|\mathcal{X}\|_* - \max_{\substack{\mathcal{A}_\ell * \mathcal{A}_\ell^\text{T} = \mathcal{I}, \\ \mathcal{B}_\ell * \mathcal{B}_\ell^\text{T} = \mathcal{I}}} \text{tr}(\mathcal{A}_\ell * \mathcal{X} * \mathcal{B}_\ell^\text{T}) \\
\text{s.t.} \, \ \ & \ \mathcal{X}_{{\Omega}} = \mathcal{M}_{{\Omega}} .
\end{aligned}
\end{equation}

[4]   S. Xue, W. Qiu, F. Liu, and X. Jin. Low-rank tensor completion by truncated nuclear norm regularization. 24th International Conference on Pattern Recognition, Beijing, 2018, pp. 2600-2605. DOI 10.1109/ICPR.2018.8546008
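For intuition about the max term in the model above (our reading of the truncated-nuclear-norm construction; see [4] for the precise tensor statement): in the matrix case, with \(A_\ell\) and \(B_\ell\) having \(\ell\) orthonormal rows,
\begin{equation*}
\max_{A_\ell A_\ell^{\mathrm{T}} = I,\; B_\ell B_\ell^{\mathrm{T}} = I} \mathrm{tr}(A_\ell X B_\ell^{\mathrm{T}}) = \sum_{i=1}^{\ell} \sigma_i(X),
\qquad
\|X\|_* - \sum_{i=1}^{\ell} \sigma_i(X) = \sum_{i=\ell+1}^{\min(m,n)} \sigma_i(X),
\end{equation*}
so the objective penalizes only the smallest singular values while leaving the \(\ell\) largest free. The tensor model applies the same construction through the t-product, with \(\mathcal{A}_\ell\) and \(\mathcal{B}_\ell\) typically refreshed from the truncated t-SVD of the current estimate of \(\mathcal{X}\).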

Fig. 10 An example of the CPLRR architecture with 3 classes. In a classwise manner, our approach jointly aligns the images and learns a nonlinear projective function. It separates out the low-rank components via the domain transformations and projects the original corrupted images onto the low-rank representations of their correct categories.

\begin{equation} \begin{aligned} \label{eq:CPLRR}
\min_{A,E,\varDelta\tau,W,B} \quad & \sum_{i=1}^{N}||A_i||_* + \lambda ||E_i||_1 \\
\mathrm{s.t.} \, \quad \quad & D_i \circ \tau_i + \sum_{k=1}^{n_i} J_{ik} \varDelta \tau_i \varepsilon_k \varepsilon_k^{\mathrm{T}} = A_i + E_i, \\
& A_i - f(W_iD_i + B_i) = 0, \ \ i = 1,2,\cdots,N, \\
& \! - \sum_{j=1}^{N} ||A_i - f(W_iD_j + B_i)||_\mathrm{F}^2 < \xi,\ j \neq i . \\
\end{aligned} \end{equation}

[5]   S. Xue and X. Jin. Robust classwise and projective low-rank representation for image classification. Signal, Image and Video Processing, 2018, 12(1):107-115. DOI 10.1007/s11760-017-1136-1