
S2OGAN: A SAR-optical Image Generative Translation Method for Texture and Edge Fidelity

Journal of Earth Sciences and Environment (《地球科学与环境学报》) [ISSN:1672-6561/CN:61-1423/P]

Issue:
2025, No. 04
Page:
806-828
Research Field:
Special Issue on Ecological Protection and High-quality Development of the Yellow River Basin (Part II)
Publishing date:

Info

Title:
S2OGAN: A SAR-optical Image Generative Translation Method for Texture and Edge Fidelity
Author(s):
DING Ming-tao1,2,3,4,5*, LU Zhao-long1,3, LI Zhen-hong1,2,3,4, JIANG Hui1,3, HUANG Wu-biao6
(1. School of Geological Engineering and Geomatics, Chang'an University, Xi'an 710054, Shaanxi, China; 2. State Key Laboratory of Loess Science, Xi'an 710054, Shaanxi, China; 3. Big Data Center for Geosciences and Satellites, Chang'an University, Xi'an 710054, Shaanxi, China; 4. Key Laboratory of Ecological Geology and Disaster Prevention of Ministry of Natural Resources, Chang'an University, Xi'an 710054, Shaanxi, China; 5. Key Laboratory of Smart Earth, Beijing 100029, China; 6. School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, Hubei, China)
Keywords:
S2OGAN; image translation; cGAN; texture fidelity; edge fidelity; SAR image; glacier; Sanjiangyuan region
PACS:
P237
DOI:
10.19814/j.jese.2024.10023
Abstract:
Optical remote sensing imagery, with its rich texture and spectral information, is a valuable data resource. Its acquisition, however, is highly susceptible to variations in illumination and atmospheric conditions: adverse weather such as cloud cover, fog, or heavy precipitation often causes data loss or degraded image quality, which constrains continuous spatiotemporal monitoring and application performance. In comparison, synthetic aperture radar (SAR) offers all-weather, day-and-night imaging capabilities and can effectively compensate for the limitations of optical remote sensing imagery. To enhance the continuity and completeness of remote sensing monitoring, a SAR-to-optical generative translation method, named S2OGAN, is proposed within the conditional generative adversarial network (cGAN) framework, targeting the scattering noise common in SAR images. S2OGAN incorporates a denoising convolutional neural network (DnCNN) as a dedicated denoising module to suppress noise and improve texture fidelity; in addition, the histogram of orientated phase congruency (HOPC) is introduced as an edge loss to strengthen the representation of edge features, so that both texture and structural information are reconstructed more accurately. To evaluate the effectiveness of S2OGAN in glacier scenarios, a dedicated glacier observation experimental dataset is constructed on the Google Earth Engine (GEE) platform.
Experimental results on several widely used datasets demonstrate that S2OGAN outperforms the classical image translation methods Pix2Pix, CycleGAN, CUT, and Semi-I2I in overall performance. Under varying spatial resolutions, S2OGAN is more stable than these four methods across multiple quantitative evaluation metrics, highlighting its superior robustness. All five methods perform better in simple scenarios than in complex ones; S2OGAN, however, not only achieves the best overall performance but also remains relatively stable in complex scenarios. In glacier observation tasks, the structural similarity index (SSIM) of images translated by S2OGAN reaches 0.668, effectively supporting the quantitative delineation of glacier ice tongue boundaries, although certain differences remain between the translated texture details and those of true-color optical remote sensing images. Finally, based on the translated images, the ablation areas of two ice tongues of the Geladandong glacier in the Sanjiangyuan region are estimated at approximately 0.346 3 km² and 0.089 0 km², respectively. These results demonstrate the potential of S2OGAN as a new technical approach for continuous remote sensing monitoring in meteorologically challenging regions.
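The abstract describes S2OGAN's objective as a cGAN adversarial term combined with a DnCNN denoising path and a HOPC edge loss. As a minimal sketch of how such a composite generator objective is typically weighted in Pix2Pix-style translators (the function name and weight values below are illustrative assumptions, not taken from the paper):

```python
# Illustrative weights; the paper's actual loss weights are not given in the abstract.
LAMBDA_REC, LAMBDA_EDGE = 100.0, 10.0

def generator_objective(adv_loss, rec_loss, edge_loss,
                        lambda_rec=LAMBDA_REC, lambda_edge=LAMBDA_EDGE):
    """Composite cGAN generator objective: an adversarial realism term,
    a weighted pixel/texture reconstruction term, and a weighted
    edge-consistency term (e.g. a HOPC-based structural loss)."""
    return adv_loss + lambda_rec * rec_loss + lambda_edge * edge_loss

# With these weights the reconstruction term dominates, which biases the
# generator toward texture fidelity while the edge term preserves structure.
total = generator_objective(adv_loss=0.7, rec_loss=0.05, edge_loss=0.02)
```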
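The reported SSIM of 0.668 follows the standard structural similarity definition. A minimal pure-Python sketch of the global (single-window) form, assuming grayscale intensities normalized to [0, 1]; practical evaluations use a sliding-window variant such as `skimage.metrics.structural_similarity`:

```python
def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    # Global SSIM over two equally sized grayscale images, flattened to
    # 1-D sequences of intensities in [0, L]; identical inputs score 1.0.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)          # variance of x
    vy = sum((b - my) ** 2 for b in y) / (n - 1)          # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2                 # stabilizers
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

patch = [0.1, 0.5, 0.9, 0.3]
score = ssim_global(patch, patch)   # identical images give SSIM = 1.0
```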

References:

[1] DONG Yue,DING Ming-tao,LI Xin-long,et al.Evolution of the 2018 Baige Landslides Revealed by Optical Remote Sensing Pixel Offsets in Jinsha River Basin,China[J].Journal of Earth Sciences and Environment,2022,44(6):1002-1015.
[2] MA Y C,CHEN S,ERMON S,et al.Transfer Learning in Environmental Remote Sensing[J].Remote Sensing of Environment,2024,301:113924.
[3] HAN W,ZHANG X H,WANG Y,et al.A Survey of Machine Learning and Deep Learning in Remote Sensing of Geological Environment:Challenges,Advances,and Opportunities[J].ISPRS Journal of Photogrammetry and Remote Sensing,2023,202:87-113.
[4] JIA P Y,CHEN C,ZHANG D L,et al.Semantic Segmentation of Deep Learning Remote Sensing Images Based on Band Combination Principle:Application in Urban Planning and Land Use[J].Computer Communications,2024,217:97-106.
[5] XIN Lu-bin,HAN Ling,LI Liang-zhi.Landslide Intelligent Recognition Based on Multi-source Data Fusion[J].Journal of Earth Sciences and Environment,2023,45(4):920-928.
[6] LI Sheng-yang,ZHANG Wan-feng,YANG Song.Intelligence Fusion Method Research of Multi-source High-resolution Remote Sensing Images[J].Journal of Remote Sensing,2017,21(3):415-424.
[7] WANG L,XU X,YU Y,et al.SAR-to-optical Image Translation Using Supervised Cycle-consistent Adversarial Networks[J].IEEE Access,2019,7:129136-129149.
[8] WANG Z B,MA Y K,ZHANG Y N.Hybrid cGAN:Coupling Global and Local Features for SAR-to-optical Image Translation[J].IEEE Transactions on Geoscience and Remote Sensing,2022,60:5236016.
[9] ZHAO Y T,CELIK T,LIU N Q,et al.A Comparative Analysis of GAN-based Methods for SAR-to-optical Image Translation[J].IEEE Geoscience and Remote Sensing Letters,2022,19:3512605.
[10] FUENTES REYES M,AUER S,MERKLE N,et al.SAR-to-optical Image Translation Based on Conditional Generative Adversarial Networks:Optimization,Opportunities and Limits[J].Remote Sensing,2019,11(17):2067.
[11] WANG H X,ZHANG Z G,HU Z Y,et al.SAR-to-optical Image Translation with Hierarchical Latent Features[J].IEEE Transactions on Geoscience and Remote Sensing,2022,60:5233812.
[12] LI Wei-guo,JIANG Nan,XIONG Shi-wei.Multi-spectral and SAR Wavelet Fusion Based on ARSIS Strategy[J].Transactions of the Chinese Society of Agricultural Engineering,2012,28(S1):158-163.
[13] GAO G,WANG M X,ZHANG X,et al.DEN: A New Method for SAR and Optical Image Fusion and Intelligent Classification[J].IEEE Transactions on Geoscience and Remote Sensing,2025,63:5201118.
[14] YE Y X,ZHANG J C,ZHOU L,et al.Optical and SAR Image Fusion Based on Complementary Feature Decomposition and Visual Saliency Features[J].IEEE Transactions on Geoscience and Remote Sensing,2024,62:5205315.
[15] ZUO Z C,LI Y X.A SAR-to-optical Image Translation Method Based on Pix2Pix[C]∥IEEE.2021 IEEE International Geoscience and Remote Sensing Symposium.Brussels:IEEE,2021:3026-3029.
[16] LUO Q L,LI H,CHEN Z Y,et al.ADD-UNet:An Adjacent Dual-decoder UNet for SAR-to-optical Translation[J].Remote Sensing,2023,15(12):3125.
[17] ZHANG Wen-yuan,TAN Guo-xin,SUN Chuan-ming.An Approach to Translate SAR Image into Optical Image[J].Geomatics and Information Science of Wuhan University,2017,42(2):178-184,192.
[18] QIN Yong.Research on Theory and Application of Image Translation Between SAR and Optical Image[J].Acta Geodaetica et Cartographica Sinica,2020,49(10):1375-1385.
[19] CHENG Fei-fei,FU Zhi-tao,HUANG Liang,et al.Review of Deep Learning in Optical and SAR Image Fusion[J].National Remote Sensing Bulletin,2022,26(9):1744-1756.
[20] GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al.Generative Adversarial Nets[J].Advances in Neural Information Processing Systems,2014,27:2672-2680.
[21] ISOLA P,ZHU J Y,ZHOU T H,et al.Image-to-image Translation with Conditional Adversarial Networks[C]∥IEEE.2017 IEEE Conference on Computer Vision and Pattern Recognition.Honolulu:IEEE,2017:5967-5976.
[22] DING X,WANG Y W,XU Z H,et al.Continuous Conditional Generative Adversarial Networks:Novel Empirical Losses and Label Input Mechanisms[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2023,45(7): 8143-8158.
[23] MOHAMMED S S,CLARKE H G.Conditional Image-to-image Translation Generative Adversarial Network(cGAN)for Fabric Defect Data Augmentation[J].Neural Computing and Applications,2024,36:20231-20244.
[24] ZHU J Y,PARK T,ISOLA P,et al.Unpaired Image-to-image Translation Using Cycle-consistent Adversarial Networks[C]∥IEEE.2017 IEEE International Conference on Computer Vision.Venice:IEEE,2017:2242-2251.
[25] PARK T,EFROS A A,ZHANG R,et al.Contrastive Learning for Unpaired Image-to-image Translation[C]∥VEDALDI A,BISCHOF H,BROX T,et al.Computer Vision-ECCV 2020:16th European Conference.Glasgow:ECCV,2020:319-345.
[26] WANG T C,LIU M Y,ZHU J Y,et al.High-resolution Image Synthesis and Semantic Manipulation with Conditional GANs[C]∥IEEE.2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City:IEEE,2018:8798-8807.
[27] DU W L,ZHOU Y,ZHU H C,et al.A Semi-supervised Image-to-image Translation Framework for SAR-optical Image Matching[J].IEEE Geoscience and Remote Sensing Letters,2022,19:4516305.
[28] WEI J,ZOU H X,SUN L,et al.CFRWD-GAN for SAR-to-optical Image Translation[J].Remote Sensing,2023,15(10):2547.
[29] HU X K,ZHANG P Z,BAN Y F,et al.GAN-based SAR and Optical Image Translation for Wildfire Impact Assessment Using Multi-source Remote Sensing Data[J].Remote Sensing of Environment,2023,289:113522.
[30] AMITRANO D.Multitemporal SAR-to-optical Image Translation Using Pix2Pix with Application to Vegetation Monitoring[J].IEEE Access,2024,12:124402-124413.
[31] ZHANG M J,ZHANG P,ZHANG Y H,et al.SAR-to-optical Image Translation via an Interpretable Network[J].Remote Sensing,2024,16(2):242.
[32] GUO Z,ZHANG Z B,CAI Q L,et al.MS-GAN:Learn to Memorize Scene for Unpaired SAR-to-optical Image Translation[J].IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,2024,17:11467-11484.
[33] ILESANMI A E,ILESANMI T O.Methods for Image Denoising Using Convolutional Neural Network:A Review[J].Complex & Intelligent Systems,2021,7(5):2179-2198.
[34] YE Y X,SHAN J,BRUZZONE L,et al.Robust Registration of Multimodal Remote Sensing Images Based on Structural Similarity[J].IEEE Transactions on Geoscience and Remote Sensing,2017,55(5):2941-2958.
[35] SCHMITT M,HUGHES L H,ZHU X X.The SEN1-2 Dataset for Deep Learning in SAR-optical Data Fusion[C]∥ISPRS.2018 ISPRS TC Midterm Symposium“Innovative Sensing:From Sensors to Methods and Applications”.Karlsruhe:ISPRS,2018:141-146.
[36] SCHMITT M,HUGHES L H,QIU C,et al.SEN12MS:A Curated Dataset of Georeferenced Multi-spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion[C]∥ISPRS.2019 PIA19+MRSS19:Photogrammetric Image Analysis & Munich Remote Sensing Symposium.Munich:ISPRS,2019:153-160.
[37] HUANG M Y,XU Y,QIAN L X,et al.The QXS-SAROPT Dataset for Deep Learning in SAR-optical Data Fusion[J].Journal of Latex Class Files,2015,14(8):1-5.
[38] LIU Shi-yin,YAO Xiao-jun,GUO Wan-qin,et al.The Contemporary Glaciers in China Based on the Second Chinese Glacier Inventory[J].Acta Geographica Sinica,2015,70(1):3-16.
[39] GUO W Q,LIU S Y,XU J L,et al.The Second Chinese Glacier Inventory:Data,Methods and Results[J].Journal of Glaciology,2015,61:357-372.
[40] LEDIG C,THEIS L,HUSZÁR F,et al.Photo-realistic Single Image Super-resolution Using a Generative Adversarial Network[C]∥IEEE.2017 IEEE Conference on Computer Vision and Pattern Recognition.Honolulu:IEEE,2017:105-114.
[41] LUO Y,PI D C.SAR-to-optical Image Translation for Quality Enhancement[J].Journal of Ambient Intelligence and Humanized Computing,2023,14(8):9985-10000.
[42] BISHOP C M.Pattern Recognition and Machine Learning[M].New York:Springer,2006.
[43] ZHANG R,ISOLA P,EFROS A A,et al.The Unreasonable Effectiveness of Deep Features as a Perceptual Metric[C]∥IEEE.2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City:IEEE,2018:586-595.
[44] SIMONYAN K,ZISSERMAN A.Very Deep Convolutional Networks for Large-scale Image Recognition[C]∥DBLP.The 3rd International Conference on Learning Representations.San Diego:DBLP,2015:150-169.
[45] KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet Classification with Deep Convolutional Neural Networks[J].Communications of the ACM,2017,60(6):84-90.
[46] DE MATTOS S H V L,VICENTE L E,VICENTE A K,et al.Metrics Based on Information Entropy Applied to Evaluate Complexity of Landscape Patterns[J].PLOS One,2022,17(1):e0262680.
[47] TSOKAS A,RYSZ M,PARDALOS P M,et al.SAR Data Applications in Earth Observation:An Overview[J].Expert Systems with Applications,2022,205:117342.
[48] XIANG Y M,WANG F,WAN L,et al.SAR-PC:Edge Detection in SAR Images via an Advanced Phase Congruency Model[J].Remote Sensing,2017,9(3):209.
[49] MORRONE M C,OWENS R A.Feature Detection from Local Energy[J].Pattern Recognition Letters,1987,6:303-313.
[50] CHEN Jian.Monitoring of Glaciers Change in Geladandong Based on Multi-source Remote Sensing[D].Chengdu:Chengdu University of Technology,2021.
[51] SU Jia-can.Study on Remote Sensing Monitoring of Glacier Changes in the Geladandong Region from 1986 to 2020[D].Kunming:Yunnan Normal University,2023.

Last Update: 2025-07-25