双判别器深度残差GAN高光谱图像融合
Two-discriminators-deep residual GAN hyperspectral image pan-sharpening
2024年第29卷第7期,页码:2046-2062
纸质出版日期: 2024-07-16
DOI: 10.11834/jig.220932
周庆泽, 郭擎, 王海荣, 李安. 2024. 双判别器深度残差GAN高光谱图像融合. 中国图象图形学报, 29(07):2046-2062
Zhou Qingze, Guo Qing, Wang Hairong, Li An. 2024. Two-discriminators-deep residual GAN hyperspectral image pan-sharpening. Journal of Image and Graphics, 29(07):2046-2062
目的
为了解决高空间分辨率多光谱图像与高光谱图像融合时的多波段对多波段问题,以及高空间分辨率多光谱图像波谱范围不能完全涵盖高光谱图像波谱范围而导致的光谱失真问题,本文利用深度学习的数据驱动优势,基于高分5号(GF-5)高光谱数据和Sentinel-2多光谱数据,提出一种基于生成对抗网络(generative adversarial network, GAN)的高光谱图像空谱融合方法——双判别器深度残差GAN网络(two discriminator deep residual GAN,2DDRGAN)。
方法
考虑待融合图像间的波谱范围关系,采用分组融合策略,利用波段间的相关性,将多对多的融合问题转变为多个一对多的融合问题。使用深度残差模块深度提取图像的光谱和空间特征,用两个判别网络对融合图像的空间和光谱质量分别进行判断,改善生成网络生成的融合图像质量。另外,本文的深度学习网络不需要制作额外的融合结果标签,待融合图像本身就是标签,这大大降低了高光谱融合的工作量,也是目前深度学习遥感图像融合的根本改变。
结果
与常用传统空谱融合方法和经典深度学习方法比较的实验结果表明,对于不同地物类型数据,该网络得到的融合结果在提升空间分辨率的同时,有较高的光谱保真度。光谱曲线评价也验证了该网络对于高空间分辨率图像波谱范围以外的高光谱图像波段进行融合时有良好的光谱保真度。
结论
本文方法通过深度残差模块提取高光谱图像光谱特征和高空间分辨率图像空间特征,同时引入双判别网络,使得融合结果在保持光谱信息的同时更好地提升空间信息。
Objective
Hyperspectral image (HS) pan-sharpening obtains a fused image with both high spatial resolution and high spectral resolution by exploiting the complementary information between a high spectral resolution HS image and a high spatial resolution multispectral (MS) image. However, HS and MS images both have multiple bands, which differs from traditional MS pan-sharpening, where a single-band panchromatic (PAN) image is fused with a multi-band MS image. This many-to-many band relationship complicates the implementation of pan-sharpening methods. Moreover, for the HS bands that exceed the spectral range of the MS image, obvious spectral distortion appears in the fused image because strictly complementary physical information is lacking. To solve these problems, this paper exploits the data-driven advantages of deep learning and proposes a two-discriminator deep residual generative adversarial network (2DDRGAN) for hyperspectral image pan-sharpening, based on Gaofen-5 (GF-5) HS imagery and Sentinel-2 MS imagery.
Method
Considering the spectral range relationship between the HS and MS images, this paper adopts a grouping pan-sharpening strategy that uses the correlation between HS and MS bands to transform the many-to-many problem into multiple one-to-many problems. Each HS band lying within the spectral range of the high spatial resolution MS image is assigned directly to the group of the corresponding MS band. For the HS bands outside that spectral range, the correlation coefficient is used as the grouping criterion to match HS bands with MS bands. This strategy resolves the difficulty of fusing two multi-band images directly and indirectly improves the spectral fidelity of the fused image. The proposed 2DDRGAN consists of one generation network and two discrimination networks. The generation network extracts deep spectral and spatial features through a deep residual module. The two discrimination networks judge the spatial quality and the spectral quality of the fused image, respectively, to improve the quality of the fused image output by the generation network. The spatial discrimination network compares the result of the generation network with the high spatial resolution MS image to ensure that the generated fused image has high spatial resolution, while the spectral discrimination network compares the result with the HS image to ensure that the generated fused image has high spectral fidelity. Moreover, existing deep learning pan-sharpening methods lack real images with both high spatial and high spectral resolution to serve as fusion labels, and most of them are currently trained on simulated data generated under the Wald protocol. The network in this paper does not need additional fusion labels: the images to be fused are themselves the labels, which greatly reduces the workload of hyperspectral fusion with its huge data volume and represents a fundamental change in current deep-learning-based fusion.
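To make the grouping strategy concrete, the following Python sketch illustrates one way the band assignment could be implemented. It is only a minimal illustration under assumed inputs (co-registered NumPy cubes, known HS band-center wavelengths, and known MS band ranges); the function name group_hs_bands and the assign-to-most-correlated-MS-band rule are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def group_hs_bands(hs, ms, hs_centers, ms_ranges):
    """Assign each HS band to one MS band group (illustrative sketch).

    hs         : (H, W, B_hs) hyperspectral cube (e.g., GF-5), assumed
                 co-registered and resampled to the MS grid so that
                 correlations can be computed per pixel.
    ms         : (H, W, B_ms) multispectral cube (e.g., Sentinel-2).
    hs_centers : (B_hs,) band-center wavelengths of the HS bands, in nm.
    ms_ranges  : list of (low, high) wavelength ranges of the MS bands, in nm.
    Returns a list `groups`, where groups[j] holds the HS band indices
    grouped with MS band j.
    """
    groups = [[] for _ in ms_ranges]
    hs_flat = hs.reshape(-1, hs.shape[-1])   # (H*W, B_hs)
    ms_flat = ms.reshape(-1, ms.shape[-1])   # (H*W, B_ms)

    for b, center in enumerate(hs_centers):
        # Case 1: the HS band lies inside some MS band's spectral range --
        # assign it directly to that MS band's group.
        inside = [j for j, (lo, hi) in enumerate(ms_ranges) if lo <= center <= hi]
        if inside:
            groups[inside[0]].append(b)
            continue
        # Case 2: the HS band is outside the MS spectral coverage --
        # use the correlation coefficient as the grouping criterion and
        # attach the band to the most correlated MS band.
        corrs = [np.corrcoef(hs_flat[:, b], ms_flat[:, j])[0, 1]
                 for j in range(ms_flat.shape[-1])]
        groups[int(np.argmax(corrs))].append(b)
    return groups
```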
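Likewise, the one-generator/two-discriminator layout can be sketched in PyTorch. The layer counts, channel widths, and class names below are illustrative assumptions rather than the paper's exact configuration; the sketch only shows how a residual generator and two small PatchGAN-style discriminators (one fed with spatial evidence, one with spectral evidence) could be wired together.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block used to extract deep spatial/spectral features."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Fuses one band group: upsampled HS bands of the group plus the
    corresponding MS band(s), and outputs the sharpened HS bands."""
    def __init__(self, hs_bands, ms_bands, ch=64, n_blocks=8):
        super().__init__()
        self.head = nn.Conv2d(hs_bands + ms_bands, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, hs_bands, 3, padding=1)
    def forward(self, hs_up, ms):
        x = self.head(torch.cat([hs_up, ms], dim=1))
        return self.tail(self.body(x))

class Discriminator(nn.Module):
    """Patch-level real/fake critic; instantiated twice, once for the
    spatial branch (judged against MS) and once for the spectral branch
    (judged against HS)."""
    def __init__(self, in_ch, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, 1, 4, padding=1))  # per-patch score map
    def forward(self, x):
        return self.net(x)
```

In a training loop built on this skeleton, one adversarial loss could, for example, be computed between the fused result and the MS input for the spatial critic, and another between a spatially degraded version of the fused result and the original HS bands for the spectral critic, so that the input images themselves act as labels, in the spirit of the no-reference training described above.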
Result
The fusion results of the proposed 2DDRGAN in different scenarios are compared with traditional methods and existing deep learning methods. The experimental results show that 2DDRGAN achieves high spectral fidelity while improving the spatial resolution. The spectral curve evaluation also verifies that the 2DDRGAN network keeps good spectral fidelity for the hyperspectral bands beyond the spectral range of the high spatial resolution image.
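The spectral fidelity check behind such a spectral curve evaluation can be illustrated with a short sketch that compares per-pixel spectral curves of the fused and reference images via the spectral angle. This is an assumed, generic implementation for illustration, not the paper's evaluation code.

```python
import numpy as np

def mean_spectral_angle(fused, reference):
    """Mean spectral angle (radians) between per-pixel spectral curves.

    fused, reference : (H, W, B) arrays with the same band ordering.
    A smaller angle means the fused spectral curves stay closer to the
    reference hyperspectral curves, i.e., higher spectral fidelity.
    """
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    r = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    dot = np.sum(f * r, axis=1)
    norm = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + 1e-12
    angles = np.arccos(np.clip(dot / norm, -1.0, 1.0))
    return float(angles.mean())
```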
Conclusion
The proposed method extracts the spectral features of the hyperspectral image and the spatial features of the high spatial resolution image through a deep residual module and introduces two discrimination networks, so that the fusion results better enhance the spatial information while preserving the spectral information.
高光谱图像空谱融合;高空间分辨率多光谱图像;光谱失真;融合策略;生成对抗网络(GAN);光谱曲线评价
spatial-spectral fusion of hyperspectral images; high spatial resolution multispectral image; spectral distortion; fusion strategy; generative adversarial network (GAN); spectral curve evaluation