Table of Contents

DING Ming-tao*, LU Zhao-long, LI Zhen-hong, et al. S2OGAN: A SAR-optical Image Generative Translation Method for Texture and Edge Fidelity[J]. Journal of Earth Sciences and Environment, 2025, 47(04): 806-828. [doi:10.19814/j.jese.2024.10023]


Journal of Earth Sciences and Environment [ISSN: 1672-6561 / CN: 61-1423/P]

Volume:
Vol. 47
Issue:
No. 04, 2025
Pages:
806-828
Section:
Special Issue on Ecological Protection and High-Quality Development of the Yellow River Basin (Part II)
Publication Date:
2025-07-15

Article Info

Title:
S2OGAN: A SAR-optical Image Generative Translation Method for Texture and Edge Fidelity
Article ID:
1672-6561(2025)04-0806-23
Author(s):
DING Ming-tao 1,2,3,4,5,*; LU Zhao-long 1,3; LI Zhen-hong 1,2,3,4; JIANG Hui 1,3; HUANG Wu-biao 6
(1. School of Geological Engineering and Geomatics, Chang'an University, Xi'an 710054, Shaanxi, China; 2. State Key Laboratory of Loess Science, Xi'an 710054, Shaanxi, China; 3. Big Data Center for Geosciences and Satellites, Chang'an University, Xi'an 710054, Shaanxi, China; 4. Key Laboratory of Ecological Geology and Disaster Prevention of Ministry of Natural Resources, Chang'an University, Xi'an 710054, Shaanxi, China; 5. Key Laboratory of Smart Earth, Beijing 100029, China; 6. School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, Hubei, China)
Keywords:
S2OGAN; image translation; cGAN; texture fidelity; edge fidelity; SAR image; glacier; Sanjiangyuan region
CLC Number:
P237
DOI:
10.19814/j.jese.2024.10023
Document Code:
A
Abstract:
Optical remote sensing imagery, characterized by its rich texture and spectral information, has become a valuable data resource. However, its acquisition is highly susceptible to variations in illumination and atmospheric conditions: adverse weather such as cloud cover, fog, or heavy precipitation often results in data loss or degraded image quality, which constrains continuous spatiotemporal monitoring and downstream applications. In contrast, synthetic aperture radar (SAR) offers all-weather, day-and-night imaging and can effectively compensate for these limitations of optical remote sensing. To enhance the continuity and completeness of remote sensing monitoring, a SAR-to-optical generative translation method, named S2OGAN, is proposed on the basis of the conditional generative adversarial network (cGAN) framework, targeting the speckle noise that is common in SAR images. S2OGAN incorporates a denoising convolutional neural network (DnCNN) as a dedicated denoising module to suppress noise and improve texture fidelity; in addition, the histogram of orientated phase congruency (HOPC) is introduced as an edge loss to strengthen the representation of edge features, thereby achieving more accurate reconstruction of both texture and structure. To evaluate the effectiveness of S2OGAN in glacier scenarios, a dedicated glacier observation dataset is constructed on the Google Earth Engine (GEE) platform. Experimental results on four widely used datasets demonstrate that S2OGAN outperforms the classical image translation methods Pix2Pix, CycleGAN, CUT, and Semi-I2I in overall performance. Across multiple spatial resolutions, S2OGAN is more stable than these four methods on the quantitative evaluation metrics, highlighting its robustness. In simple scenes, all five methods tend to perform better than in complex scenes; S2OGAN not only achieves the best overall performance but also remains relatively stable in complex scenes. In glacier observation tasks, the structural similarity index (SSIM) of images translated by S2OGAN reaches 0.668, which effectively supports the quantitative delineation of glacier ice-tongue boundaries, although certain differences remain between the translated textures and those of real optical remote sensing images. Finally, based on the translated images, the ablation areas of two ice tongues of the Geladandong Glacier in the Sanjiangyuan region are estimated to be approximately 0.3463 km² and 0.0890 km², respectively. These results demonstrate the potential of S2OGAN as a new technical approach for continuous remote sensing monitoring in cloudy and foggy regions.
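The training objective is described only qualitatively above. The following is a minimal sketch of a composite generator loss consistent with that description, assuming a Pix2Pix-style L1 pixel term and treating the weights λ₁ and λₑ as placeholders (the paper's exact formulation is not reproduced here):

\mathcal{L}_G = \mathcal{L}_{\mathrm{cGAN}}(G, D)
              + \lambda_1 \, \mathbb{E}_{x,y}\!\left[ \lVert y - G(x) \rVert_1 \right]
              + \lambda_e \, \mathbb{E}_{x,y}\!\left[ \lVert \mathrm{HOPC}(G(x)) - \mathrm{HOPC}(y) \rVert_1 \right]

Here x is the input SAR image, y its paired optical image, and HOPC(·) the histogram-of-orientated-phase-congruency descriptor; the edge term penalizes structural disagreement between the translated and reference images.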
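The denoising module is specified above only as "a DnCNN". Below is a minimal PyTorch sketch of a DnCNN-style residual denoiser; the depth (17), width (64), and single-channel input follow the original DnCNN configuration and are assumptions, not this paper's exact settings.

import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN-style residual denoiser (sketch): Conv+ReLU, then
    (depth - 2) x Conv+BN+ReLU, then Conv; the stack predicts the noise map."""
    def __init__(self, channels: int = 1, depth: int = 17, features: int = 64):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning: denoised output = noisy input - predicted noise.
        return x - self.body(x)

# Usage sketch: `sar` is an (N, 1, H, W) tensor of speckled SAR intensities.
# denoised = DnCNN()(sar)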
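The reported ablation areas (0.3463 km² and 0.0890 km²) amount to counting changed pixels between ice-tongue masks delineated on the translated images at two dates. A hypothetical NumPy sketch of that bookkeeping, assuming boolean masks and a 10 m pixel size (Sentinel-2-like; the paper's actual resolution and delineation procedure may differ):

import numpy as np

def ablation_area_km2(mask_before: np.ndarray,
                      mask_after: np.ndarray,
                      pixel_size_m: float = 10.0) -> float:
    """Area (km^2) covered by ice in `mask_before` but not in `mask_after`.
    Both masks are boolean arrays of identical shape; `pixel_size_m` is the
    ground sampling distance of one pixel."""
    lost = mask_before & ~mask_after          # pixels no longer ice-covered
    return float(lost.sum()) * pixel_size_m ** 2 / 1e6  # m^2 -> km^2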


Memo:
Received: 2024-10-15; Revised: 2025-03-28
Foundation Items: National Natural Science Foundation of China (42374027); National Key Research and Development Program of China (2021YFC300400); Science and Technology Innovation Team Project of Shaanxi Province (2021TD-51); Innovation Team Project for Geoscience Big Data and Geohazard Prevention of Shaanxi Province (2022); Fund of the Key Laboratory of Smart Earth (KF2023YB04-01); Fundamental Research Funds for the Central Universities (300102262203, 300102262902)
*Corresponding Author: DING Ming-tao (1983-), male, born in Tianmen, Hubei, China; professor at Chang'an University with a Ph.D. in science; E-mail: mingtaoding@chd.edu.cn
Last Update: 2025-07-25