Block Sparse Bayesian Learning DOA Estimation Based on Subspace Fitting

Applied Science and Technology, 2020, Vol. 47, Issue (4): 42-46. DOI: 10.11991/yykj.201911007

### Cite this article

SHEN Xiangxiang, ZHAO Jianbo. Block sparse Bayesian learning DOA estimation based on subspace fitting[J]. Applied Science and Technology, 2020, 47(4): 42-46. DOI: 10.11991/yykj.201911007.


Block sparse Bayesian learning DOA estimation based on subspace fitting
SHEN Xiangxiang, ZHAO Jianbo
College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
Abstract: To improve the performance of traditional direction-of-arrival (DOA) estimation algorithms based on sparse Bayesian learning under low signal-to-noise ratio (SNR), we propose a new off-grid DOA estimation method based on subspace fitting and block sparse Bayesian learning. First, the weighted signal subspace is obtained by eigenvalue decomposition of the sample covariance matrix; a sparse representation model of the equivalent signal is then constructed, and its parameters are solved by a block sparse Bayesian algorithm. At the same time, to handle the modeling error caused by grid mismatch, the discrete sampling grid points in the spatial domain are treated as dynamic parameters and are updated iteratively within an expectation-maximization framework by rooting a polynomial. Simulation results indicate that the proposed method achieves better DOA estimation accuracy and spatial resolution than the traditional SBL algorithm.
Keywords: DOA estimation    sparse Bayesian learning    subspace fitting    sparse representation    temporal correlation    relevance vector machine    grid mismatch    polynomial rooting

1 Subspace fitting model

The output of an $M$-element array receiving signals from directions $\theta$ over $L$ snapshots is

 ${{X}}(t) = {{A}}(\theta ){{S}}(t) + {{N}}(t),t = 1,2, \cdots ,L$

The covariance matrix of the received data is

 ${{{R}}_X} = {{E}}[{{X}}(t){{{X}}^{\rm{H}}}(t)] = {{A}}(\theta ){{{R}}_s}{{{A}}^{\rm{H}}}(\theta ) + \sigma _n^2{{I}}$

Its eigenvalue decomposition separates the signal and noise subspaces:

 ${{{R}}_X} = \sum\limits_{i = 1}^M {{\mu _i}{{{v}}_i}{{v}}_i^{\rm{H}} = {{{U}}_s}{{{\varLambda }}_s}{{U}}_s^{\rm{H}}} + {{{U}}_n}{{{\varLambda }}_n}{{U}}_n^{\rm{H}}$

In the noise-free case the signal subspace spans the same space as the array manifold, i.e.

 ${{{U}}_s} = {{AT}}$

for some nonsingular matrix ${{T}}$. The weighted subspace fitting criterion is

 ${{\hat \theta }} = \arg \mathop {\min }\limits_{{\theta }} \left\| {{{{U}}_s}{{{W}}^{1/2}} - {{AT}}} \right\|_F^2$

Letting ${{Y}} = {{{U}}_s}{{{W}}^{1/2}}$ denote the weighted signal subspace, the model can be written as

 ${{Y}} = {{AT}} + {{E}}$ (1)

where ${{E}}$ accounts for the subspace estimation error.
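As a concrete illustration of this step, here is a minimal NumPy sketch of building the weighted signal subspace from sample data. The specific weighting $W = (\varLambda_s - \sigma^2 I)^2 \varLambda_s^{-1}$ is one standard WSF choice and an assumption here, as are all simulation parameters:

```python
import numpy as np

def weighted_signal_subspace(X, n_sources):
    """Build Y = U_s W^{1/2} from snapshots X (M x L).
    W = (Lam_s - sigma^2 I)^2 Lam_s^{-1} is a standard WSF weighting
    (an assumption; the paper's exact W may differ)."""
    M, L = X.shape
    R = X @ X.conj().T / L                       # sample covariance
    eigval, eigvec = np.linalg.eigh(R)           # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    Lam_s = eigval[:n_sources]                   # signal eigenvalues
    sigma2 = eigval[n_sources:].mean()           # noise-power estimate
    W = np.diag((Lam_s - sigma2) ** 2 / Lam_s)   # WSF weighting
    U_s = eigvec[:, :n_sources]
    return U_s @ np.sqrt(W)

# toy check: two sources on an 8-element half-wavelength ULA
rng = np.random.default_rng(0)
M, L = 8, 200
angles = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
Y = weighted_signal_subspace(X, 2)
print(Y.shape)  # (8, 2)
```

The columns of Y should lie close to the column space of the true manifold A, which is what model (1) exploits.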

2 Subspace fitting block sparse Bayesian algorithm

2.1 Sparse model

Extending the array manifold over a discretized angle grid ${{\hat \theta }}$ yields an overcomplete dictionary ${{\varPhi }}$, and model (1) becomes the sparse representation

 ${{Y}} = {{\varPhi \hat T}} + {{E}}$ (2)

where only the rows of ${{\hat T}}$ corresponding to true directions are nonzero. Model (2) can equivalently be written in vectorized form as

 ${{y}} = {{Ds}} + {{n}}$ (3)
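A dictionary of this kind is easy to construct; the sketch below assumes a half-wavelength ULA and a 2° grid (both illustrative choices, not from the paper):

```python
import numpy as np

def steering_dictionary(grid_deg, M, d_over_lambda=0.5):
    """Overcomplete steering dictionary Phi (M x N) over a coarse angle grid.
    A half-wavelength ULA is assumed for illustration."""
    grid = np.deg2rad(np.asarray(grid_deg, dtype=float))
    return np.exp(2j * np.pi * d_over_lambda
                  * np.outer(np.arange(M), np.sin(grid)))

Phi = steering_dictionary(np.arange(-90, 91, 2), M=8)  # 2-degree grid
print(Phi.shape)  # (8, 91)
```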

2.2 Block sparse Bayesian learning

Under a block-sparse Gaussian prior with variance hyperparameters ${{\gamma }}$, intra-block correlation matrix ${{B}}$, and noise parameter $\lambda$, the posterior of ${{s}}$ given the data is complex Gaussian:

 $p(\left. {{s}} \right|{{y}};{{\gamma }},{{B}},\lambda ) = {\rm{CN}}({{{\mu }}_s},{{{\varSigma }}_s})$

and the EM updates of the hyperparameters are

 ${\gamma _i} = {{{\rm{tr}}\left[ {{{{B}}^{ - 1}}({{\varSigma }}_s^i + {{\mu }}_s^i{{({{\mu }}_s^i)}^{\rm{H}}})} \right]}/K},i = 1,2, \cdots ,N$ (4)
 ${{B}} = \left\{ {\sum\limits_{i = 1}^N {{{({{\varSigma }}_s^i + {{\mu }}_s^i{{({{\mu }}_s^i)}^{\rm{H}}})}/{{\gamma _i}}}} } \right\}$ (5)
 $\lambda = \frac{{\left\| {{{y}} - {{D}}{{{\mu }}_s}} \right\|_2^2 + \lambda \left[ {NK - {\rm{tr}}({{{\varSigma }}_s}{{\varSigma }}_0^{ - 1})} \right]}}{{MK}}$ (6)

Returning to the matrix model (2), the posterior mean and covariance of ${{\hat T}}$ are

 ${{\hat T}} = {{\varGamma }}{{{\varPhi }}^{\rm{H}}}{(\lambda {{I}} + {{\varPhi \varGamma }}{{{\varPhi }}^{\rm{H}}})^{ - 1}}{{Y}}$ (7)
 ${{{\varSigma }}_{\hat T}} = {\Bigg({{{\varGamma }}^{ - 1}} + \frac{1}{\lambda }{{{\varPhi }}^{\rm{H}}}{{\varPhi }}\Bigg)^{ - 1}}$ (8)

and the corresponding hyperparameter updates become

 ${\gamma _i} = \frac{1}{K}{{{\hat T}}_{i }}{{{B}}^{ - 1}}{{\hat T}}_{i }^{\rm{H}} + {({{{\varSigma }}_{\hat T}})_{ii}}$ (9)
 ${{B}} = \sum\limits_{i = 1}^N {\frac{{{{\hat T}}_{i }^{\rm{H}}{{{{\hat T}}}_{i }}}}{{{\gamma _i}}}} + \eta {{I}}$ (10)
 $\lambda = \frac{1}{{MK}}\left\| {{{Y}} - {{\varPhi \hat T}}} \right\|_F^2 + \frac{\lambda }{M}{\rm{tr}}\left[ {{{\varPhi \varGamma }}{{{\varPhi }}^{\rm{H}}}{{(\lambda {{I}} + {{\varPhi \varGamma }}{{{\varPhi }}^{\rm{H}}})}^{ - 1}}} \right]$ (11)
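The update cycle (7)-(11) can be sketched in NumPy as below. The initializations, the floor on gamma, and the scale normalization of B are assumptions added for numerical stability, and the sketch omits the grid refinement of Section 2.3:

```python
import numpy as np

def bsbl_em(Y, Phi, n_iter=100, eta=1e-3, tol=1e-4):
    """Minimal sketch of the EM updates (7)-(11).
    Y : (M, K) weighted signal subspace, Phi : (M, N) dictionary."""
    M, K = Y.shape
    N = Phi.shape[1]
    gamma = np.ones(N)                 # initialization (assumption)
    B = np.eye(K, dtype=complex)
    lam = 1e-2
    for _ in range(n_iter):
        Gam = np.diag(gamma)
        C = lam * np.eye(M) + Phi @ Gam @ Phi.conj().T
        Cinv = np.linalg.inv(C)
        T_hat = Gam @ Phi.conj().T @ Cinv @ Y                         # eq (7)
        Sigma = np.linalg.inv(np.diag(1.0 / gamma)
                              + Phi.conj().T @ Phi / lam)             # eq (8)
        Binv = np.linalg.inv(B)
        quad = np.real(np.einsum('ik,kj,ij->i', T_hat, Binv, T_hat.conj()))
        gamma_new = np.maximum(quad / K + np.real(np.diag(Sigma)),
                               1e-10)                                 # eq (9)
        B = sum(np.outer(T_hat[i].conj(), T_hat[i]) / gamma_new[i]
                for i in range(N)) + eta * np.eye(K)                  # eq (10)
        B = B / np.linalg.norm(B)      # scale normalization (assumption)
        lam = (np.linalg.norm(Y - Phi @ T_hat, 'fro') ** 2 / (M * K)
               + lam * np.real(np.trace(Phi @ Gam @ Phi.conj().T
                                        @ Cinv)) / M)                 # eq (11)
        if np.linalg.norm(gamma_new - gamma) / np.linalg.norm(gamma) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma, T_hat

# toy on-grid example: two sources at -10 and 20 degrees (invented data)
rng = np.random.default_rng(1)
M, K = 8, 2
grid = np.arange(-90, 91, 5)          # N = 37 grid points
Phi = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(grid))))
T_true = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
Y = Phi[:, [16, 22]] @ T_true \
    + 0.01 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
gamma, T_hat = bsbl_em(Y, Phi)
```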

2.3 Grid update

Treating the grid points as dynamic parameters, the grid update maximizes the expected log-likelihood

 $\begin{array}{l} {{E}_{p({{y}}|{{\hat s}},\lambda ,{{\gamma }},{{B}})}}\left\{ {\ln (p({{y}}|{{\hat s}},\lambda ))} \right\} = - ||{{y}} - {{D\mu }}||_2^2 - {\rm{tr}}({{D\varSigma }}{{{D}}^{\rm{H}}}) = \\ - \displaystyle\sum\limits_{k = 1}^K {||{{{Y}}_k} - {{\varPhi }}{{{{\hat T}}}_k}||_2^2} - {\rm{tr}}\left( {{{\varPhi }}{{{\varSigma }}_{\hat T}}{{{\varPhi }}^{\rm{H}}}} \right){\rm{tr}}\left( {{B}} \right) \end{array}$ (12)

The derivatives of the two terms in (12) with respect to ${{{v}}_{{{\hat \theta }_i}}}$ are

 $\frac{{\partial \displaystyle\sum_k {\left\| {{{{Y}}_k} - {{\varPhi }}{{{{\hat T}}}_k}} \right\|} _2^2}}{{\partial {{{v}}_{{{\hat \theta }_i}}}}} = {({{{{{\alpha}} '}}_i})^{\rm{H}}}\left( {{{{\alpha }}_i}\sum\limits_{k = 1}^K {{{\left| {{{{{\hat T}}}_{ki}}} \right|}^2} - \sum\limits_{k = 1}^K {\left| {{{\hat T}}_{ki}^*} \right|{{{Y}}_{k - i}}} } } \right)$ (13)
 $\frac{{\partial {\rm{tr}}\left( {{{\varPhi }}{{{\varSigma }}_{\hat T}}{{{\varPhi }}^{\rm{H}}}} \right)}}{{\partial {{{v}}_{{{\hat \theta }_i}}}}} = {({{{\alpha '}}_i})^{\rm{H}}}\left( {{\varepsilon _{ii}}{{{\alpha }}_i} + \sum\limits_{j \ne i} {{\varepsilon _{ji}}{{{\alpha }}_j}} } \right)$ (14)

Setting the overall derivative to zero gives

 $\begin{array}{l} {({{{{\alpha '}}}_i})^{\rm{H}}}\left( {{{{\alpha }}_i}\underbrace {\left( {\sum\limits_{k = 1}^K {{{\left| {{{{{\hat T}}}_{ki}}} \right|}^2}} + {\varepsilon _{ii}}{\rm{tr}}\left( {{B}} \right)} \right)}_{ \buildrel \Delta \over = {\phi ^{(i)}}}} + \right.\\ \;\;\;\;\;\;\;\;\left. { \underbrace {{\rm{tr}}\left( {{B}} \right)\sum\limits_{j \ne i} {{\varepsilon _{ji}}{{{\alpha }}_j}} - \sum\limits_{k = 1}^K {\left| {{{\hat T}}_{ki}^*} \right|{{{y}}_{k - i}}} }_{ \buildrel \Delta \over = {\varphi ^{(i)}}}} \right)=0 \end{array}$ (15)

which is equivalent to finding the roots of the polynomial

 $\left[ {{v_{{{\hat \theta }_i}}},1,v_{{{\hat \theta }_i}}^{ - 1}, \cdots ,v_{{{\hat \theta }_i}}^{ - (M - 2)}} \right]\left[ {\begin{array}{*{20}{c}} {\dfrac{{M(M - 1)}}{2}{\phi ^{(i)}}} \\ {\varphi _2^{(i)}} \\ {2\varphi _3^{(i)}} \\ \vdots \\ {(M - 1)\varphi _M^{(i)}} \end{array}} \right] = 0$ (16)

Selecting the root ${z_i}$ closest to the unit circle, the $i$-th grid point is updated as

 $\hat \theta _i^{{\rm{new}}} = \arcsin \left( { - \frac{\lambda }{{2\pi d}} {\rm{angle}}\left( {{z_i}} \right)} \right)$ (17)

where $\lambda$ here denotes the signal wavelength and $d$ the element spacing.
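The root selection and the mapping of eq (17) back to an angle are straightforward; a sketch (the closest-to-unit-circle rule and the half-wavelength spacing are assumptions):

```python
import numpy as np

def pick_root(coeffs):
    """Select the root of the grid-update polynomial (16) closest to
    the unit circle (a common selection rule, assumed here)."""
    roots = np.roots(coeffs)
    return roots[np.argmin(np.abs(np.abs(roots) - 1.0))]

def refine_angle(z, wavelength=1.0, d=0.5):
    """Eq (17): map the selected root z back to an angle in radians.
    The sign follows the steering convention implied by eq (17)."""
    return np.arcsin(-wavelength / (2.0 * np.pi * d) * np.angle(z))

# a unit-circle root corresponding to theta = 0.3 rad round-trips correctly
theta = 0.3
z = np.exp(-2j * np.pi * 0.5 * np.sin(theta))
print(refine_angle(z))  # ~0.3
```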

2.4 Algorithm steps

1) Perform eigenvalue decomposition of the covariance matrix of the array data and construct the weighted signal subspace ${{Y}}$;

2) Initialize the parameters ${{\gamma }}$, ${{B}}$, $\lambda$ and ${{\hat \theta }}$, and set the convergence condition;

3) Iterate:

a) compute ${{\hat T}}$ and ${{{\varSigma }}_{\hat T}}$ using (7) and (8);

b) update ${{\gamma }}$, ${{B}}$, $\lambda$ and ${{\hat \theta }}$ using (9), (10), (11) and (17);

c) if ${{||{{{\gamma }}^{i + 1}} - {{{\gamma }}^i}|{|_2}}/{||{{{\gamma }}^i}|{|_2}}} < \tau$ holds or the maximum number of iterations is reached, exit the iteration; otherwise repeat steps a) and b). Here $\tau$ is the convergence threshold.

4) The DOA estimates of the signals are obtained from the locations of the peaks of ${{\gamma }}$.
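Steps c) and 4) reduce to a relative-change test and peak picking, sketched below (the peak-picking rule is an illustrative assumption):

```python
import numpy as np

def converged(gamma_new, gamma_old, tau=1e-3):
    """Step c): stop when the relative change of gamma falls below tau."""
    return np.linalg.norm(gamma_new - gamma_old) / np.linalg.norm(gamma_old) < tau

def doa_from_gamma(gamma, grid_deg, n_sources):
    """Step 4): read the DOA estimates off the largest local maxima of gamma."""
    g = np.asarray(gamma)
    peaks = [i for i in range(1, len(g) - 1)
             if g[i] > g[i - 1] and g[i] >= g[i + 1]]
    peaks.sort(key=lambda i: g[i], reverse=True)
    return sorted(float(grid_deg[i]) for i in peaks[:n_sources])

gamma = np.array([0.1, 0.2, 5.0, 0.3, 0.1, 4.0, 0.2])
grid = np.arange(-15, 20, 5)            # -15 ... 15 degrees
print(doa_from_gamma(gamma, grid, 2))   # [-5.0, 10.0]
```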

3 Simulation experiments and performance analysis

3.1 Root mean square error
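The comparison metric can be computed as below; this is a common Monte-Carlo RMSE definition and an assumption here, since the paper's exact formula is not shown:

```python
import numpy as np

def rmse_deg(estimates, truth):
    """Monte-Carlo RMSE in degrees. estimates: (trials, n_sources) array
    of DOA estimates, truth: (n_sources,) true angles. Sources are
    matched to the truth by sorted order (an assumption)."""
    est = np.sort(np.asarray(estimates, dtype=float), axis=1)
    err = est - np.sort(np.asarray(truth, dtype=float))
    return float(np.sqrt(np.mean(err ** 2)))

print(rmse_deg([[9.5, 20.5], [10.5, 19.5]], [10.0, 20.0]))  # 0.5
```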

3.2 Discrete grid interval

3.3 Spatial resolution