
Satellite Image Matching Method Based on Deep Convolution Neural Network
FAN Dazhao , DONG Yang , ZHANG Yongsheng
Institute of Geospatial Information, Information Engineering University, Zhengzhou 450001, China
Foundation support: The National Natural Science Foundation of China (No. 41401534); The Open Fund of State Key Laboratory of Geographic Information Engineering (No. SKLGIE2013-M-3-1)
First author: FAN Dazhao (1973—), male, PhD, professor, majors in digital photogrammetry. E-mail:fdzcehui@163.com
Abstract: This article addresses the first topic of the special issue on deep learning: deep convolutional methods. Traditional matching-point extraction algorithms usually rely on manually designed feature descriptors and take the shortest distance between descriptors as the matching criterion. Such matching easily falls into local extrema, causing some matching points to be missed. To address this problem, we introduce a two-channel deep convolutional neural network based on spatial-scale convolution and learn the matching pattern between images, realizing satellite image matching with a deep convolutional neural network. Experimental results show that, compared with traditional matching methods, the proposed method extracts richer matching points from heterogeneous, multi-temporal, and multi-resolution satellite images, and that the accuracy of the final matching results remains above 90%.
Key words: image matching; deep learning; object-oriented; convolution neural network; satellite image

1 Image Feature Point Matching Algorithm

1.1 Convolutional Neural Network

y = f(Wx + b)        (1)
Fig. 1 Typical fully connected neural network structure
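As a concrete illustration of the fully connected structure in Fig. 1, each layer applies an activated affine transform to its whole input vector. A minimal numpy sketch (layer sizes, weights, and the ReLU activation are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def dense_forward(x, W, b):
    """One fully connected layer: every output unit sees every input
    unit, y = f(Wx + b), here with a ReLU activation f."""
    return np.maximum(0.0, W @ x + b)

# Toy 3-input, 2-output layer.
x = np.array([1.0, -2.0, 0.5])
W = np.array([[0.2, 0.1, -0.3],
              [0.4, -0.5, 0.6]])
b = np.array([0.1, -0.2])
y = dense_forward(x, W, b)  # array([0. , 1.5])
```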

y_j = f(∑_i x_i * k_ij + b_j)        (2)
Fig. 2 Typical convolutional neural network structure
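Unlike the fully connected layers of Fig. 1, a convolutional layer (Fig. 2) slides one small shared kernel over all spatial positions, which drastically reduces the parameter count. A naive numpy sketch of a single-channel "valid" convolution (the 3×3 mean kernel is a hypothetical example):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as used in
    CNNs): the kernel is applied at every fully-overlapping position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3)) / 9.0           # 3x3 mean filter
feat = conv2d_valid(img, k)         # shape (2, 2)
# feat[0, 0] == 5.0, the mean of the top-left 3x3 block.
```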

1.2 Image Matching Model Based on a Deep Convolutional Neural Network

1.2.1 Two-channel Deep Convolutional Neural Network

Fig. 3 Two-channel deep convolutional network
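The two-channel design of Fig. 3 feeds both patches to the network at once, stacked as the two channels of a single input, instead of describing each patch separately and comparing descriptors afterwards. A minimal sketch of that input assembly (the patch size and per-patch normalization are assumptions, not the authors' exact settings):

```python
import numpy as np

def make_two_channel_input(patch_a, patch_b):
    """Stack two grayscale patches into one 2-channel array of shape
    (2, H, W); the network then learns a similarity score directly
    from the joint input rather than from separate descriptors."""
    assert patch_a.shape == patch_b.shape
    a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-8)
    b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-8)
    return np.stack([a, b], axis=0)

x = make_two_channel_input(np.random.rand(64, 64), np.random.rand(64, 64))
```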

1.2.2 Optimized Two-channel Deep Convolutional Neural Network

Fig. 4 Image pyramid process in the neural network
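The pyramid of Fig. 4 can be sketched as repeated 2× downsampling of the input so that the network sees the patch at several spatial scales. In this sketch each level is produced by 2×2 mean pooling; the number of levels and the pooling choice are assumptions:

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Simple image pyramid: each level halves the resolution by
    2x2 mean pooling, a stand-in for the multi-scale inputs of the
    spatial-scale network."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(img)
    return pyramid

p = build_pyramid(np.ones((64, 64)))  # shapes (64,64), (32,32), (16,16)
```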

Fig. 5 Spatial-scale-based two-channel deep convolutional network

1.3 Satellite Image Matching Based on the Deep Convolutional Neural Network

Fig. 6 Satellite image matching workflow based on the deep convolutional neural network
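Within the workflow of Fig. 6, the learned network replaces a hand-crafted descriptor distance: every candidate pair of points is scored by the network, and pairs whose score clears a threshold are kept for geometric verification. A simplified greedy version of that step (score_fn stands in for the trained network, and the threshold value is illustrative):

```python
import numpy as np

def match_by_network_score(points_left, points_right, score_fn, threshold=0.5):
    """For each left point, keep the best-scoring right point if its
    score clears the threshold. Returns (left_idx, right_idx, score)."""
    matches = []
    for i, pl in enumerate(points_left):
        scores = [score_fn(pl, pr) for pr in points_right]
        j = int(np.argmax(scores))
        if scores[j] >= threshold:
            matches.append((i, j, scores[j]))
    return matches

# Example with cosine similarity standing in for the network score.
cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
left = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
right = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
matches = match_by_network_score(left, right, cos)  # pairs (0,0) and (1,1)
```

In the full pipeline the surviving matches would then be filtered with RANSAC, as in the experiments of section 2.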

2 Experiments and Analysis of Results

2.1 Matching Experiments with the Two-channel Deep Convolutional Neural Network

Dataset | Data source | Training set size | Test set size
GFG | 30 m resolution Google satellite imagery and 50 m resolution GF-4 satellite imagery | 200 000 feature point pairs (50% correct, 50% incorrect matches) | 100 000 feature point pairs (50% correct, 50% incorrect matches)
THD | 5 m resolution Tianhui three-line-array stereo image pairs | 200 000 feature point pairs (50% correct, 50% incorrect matches) | 100 000 feature point pairs (50% correct, 50% incorrect matches)
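The 50%/50% composition of the training and test sets above can be produced by a balanced sampling step; a sketch (the function and variable names are hypothetical):

```python
import random

def build_balanced_set(correct_pairs, wrong_pairs, n_total, seed=0):
    """Draw half of the samples from correct matches (label 1) and
    half from wrong matches (label 0), then shuffle, mirroring the
    50%/50% composition of the GFG and THD sets."""
    rng = random.Random(seed)
    half = n_total // 2
    data = [(p, 1) for p in rng.sample(correct_pairs, half)]
    data += [(p, 0) for p in rng.sample(wrong_pairs, half)]
    rng.shuffle(data)
    return data

dataset = build_balanced_set(list(range(1000)), list(range(1000, 2000)), 200)
```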

Fig. 7 Samples of the experimental data

TPR = TP / (TP + FN)        (3)
FPR = FP / (FP + TN)        (4)

Fig. 8 TPR statistics of the GFG training set

Fig. 9 ROC curves of the GFG test set

Dataset | TPR/(%) | FPR95/(%)
THD training set | 99.516 | 0.024
THD test set | 99.100 | 0.164

2.2 Satellite Image Matching Experiments Based on the Deep Convolutional Neural Network

Fig. 10 Image data for the matching comparison experiments

Experiment No. | Feature detector | Descriptor | Descriptor matching
1 | SIFT | SIFT | ANN
2 | SURF | SURF | ANN
3 | KAZE | KAZE | BF
4 | AKAZE | AKAZE | BF
5 | ORB | ORB | BF-Hamming
6 | BRISK | BRISK | BF
7 | FAST | SIFT | ANN
8 | AGAST | SIFT | ANN
9 | Harris | SIFT | ANN
10 | Shi-Tomasi | SIFT | ANN

No. | Method | Feature points (Google image) | Feature points (GF-4 image) | Matches (traditional) | After RANSAC (traditional) | Correct rate after RANSAC/(%) (traditional) | Matches (BSS-2chDCNN) | After RANSAC (BSS-2chDCNN) | Correct rate after RANSAC/(%) (BSS-2chDCNN)
1 | SIFT | 10 905 | 831 | 23 | 5 | 0.00 | 320 | 32 | 93.75
2 | SURF | 21 837 | 6615 | 140 | 5 | 20.00 | 4017 | 124 | 100.00
3 | KAZE | 2659 | 608 | 5 | 4 | 50.00 | 122 | 14 | 92.86
4 | AKAZE | 2293 | 491 | 186 | 5 | 0.00 | 118 | 20 | 95.00
5 | ORB | 5000 | 2042 | 13 | 4 | 25.00 | 294 | 100 | 100.00
6 | BRISK | 8021 | 333 | 436 | 7 | 14.29 | 117 | 12 | 100.00
7 | FAST+SIFT | 65 178 | 2812 | 991 | 14 | 21.43 | 1682 | 65 | 98.46
8 | AGAST+SIFT | 71 291 | 3305 | 1167 | 12 | 50.00 | 1993 | 87 | 98.85
9 | Harris+SIFT | 1924 | 961 | 0 | 0 | 0.00 | 43 | 26 | 100.00
10 | Shi-Tomasi+SIFT | 8781 | 9115 | 0 | 0 | 0.00 | 3913 | 296 | 100.00

Fig. 11 Matching results based on SIFT

Fig. 12 Matching results based on SURF

Fig. 13 Matching results based on KAZE

Fig. 14 Matching results based on AKAZE

Fig. 15 Matching results based on ORB

Fig. 16 Matching results based on BRISK

Fig. 17 Matching results based on FAST+SIFT

Fig. 18 Matching results based on AGAST+SIFT

Fig. 19 Matching results based on Harris+SIFT

Fig. 20 Matching results based on Shi-Tomasi+SIFT

3 Conclusion




Article Information

FAN Dazhao, DONG Yang, ZHANG Yongsheng

Satellite Image Matching Method Based on Deep Convolution Neural Network

Acta Geodaetica et Cartographica Sinica, 2018, 47(6): 844-853
http://dx.doi.org/10.11947/j.AGCS.2018.20170627