CONCLUSION
We have shown that CycleGAN is a viable tool for remote sensing image generation, in particular for rendering snow cover onto snow-free terrain. Although the generated imagery does not fool the human eye, close inspection of certain regions reveals implanted artifacts; this cautions that any such manipulation must be examined for its effect on downstream processing. We also introduced quality assessment methods that can be used to guide when unpaired training such as CycleGAN's should be stopped. While we only studied same-domain (RGB→RGB) translation here, we anticipate future experiments with CycleGAN or pix2pix across domains; as with the cases already discussed, the artifacts potentially introduced by these models will require careful analysis.
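One concrete way to realize the stopping criterion mentioned above is the Fréchet distance between the feature distributions of real and generated images, as in the FID metric. Below is a minimal sketch, assuming feature vectors (e.g., pretrained-network activations) have already been extracted for each image; the function name and the NumPy-only eigenvalue computation are our own illustration choices, not the paper's implementation:

```python
import numpy as np

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between two feature sets modeled as Gaussians:
    FD = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_gen, rowvar=False)
    # Tr((C1 C2)^{1/2}) equals the sum of square roots of the eigenvalues
    # of C1 @ C2, which are real and nonnegative for positive-semidefinite
    # covariances; the clip guards against round-off noise.
    eigvals = np.linalg.eigvals(c1 @ c2).real
    tr_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1) + np.trace(c2) - 2.0 * tr_sqrt)
```

Tracking this distance across training checkpoints and stopping once it plateaus is one way to operationalize the criterion for unpaired training, where no per-pixel reconstruction error is available.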
ACKNOWLEDGMENTS
This work was supported by the Laboratory Research and Development program and the Center for Space and Earth Science at Los Alamos National Laboratory. We also thank Descartes Labs for imagery and technical support, and finally our colleagues for constructive discussions.
REFERENCES
[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y., “Generative adversarial nets,” in [Advances in Neural Information Processing Systems], 2672–2680 (2014).
[2] Schmitt, M., Hughes, L. H., and Zhu, X. X., “The SEN1-2 dataset for deep learning in SAR-optical data fusion,” arXiv preprint arXiv:1807.01569 (2018).
[3] Grohnfeldt, C., Schmitt, M., and Zhu, X., “A conditional generative adversarial network to fuse SAR and multispectral optical data for cloud removal from Sentinel-2 images,” in [International Geoscience and Remote Sensing Symposium (IGARSS)], 1726–1729, IEEE (2018).
[4] Ji, G., Wang, Z., Zhou, L., Xia, Y., Zhong, S., and Gong, S., “SAR image colorization using multidomain cycle-consistency generative adversarial network,” IEEE Geoscience and Remote Sensing Letters (2020).
[5] Fuentes Reyes, M., Auer, S., Merkle, N., Henry, C., and Schmitt, M., “SAR-to-optical image translation based on conditional generative adversarial networks: optimization, opportunities and limits,” Remote Sensing 11(17), 2067 (2019).
[6] Schmitt, M., Hughes, L. H., Qiu, C., and Zhu, X. X., “SEN12MS–a curated dataset of georeferenced multispectral Sentinel-1/2 imagery for deep learning and data fusion,” arXiv preprint arXiv:1906.07789 (2019).
[7] Toriya, H., Dewan, A., and Kitahara, I., “SAR2OPT: Image alignment between multi-modal images using generative adversarial networks,” in [International Geoscience and Remote Sensing Symposium (IGARSS)], 923–926, IEEE (2019).
[8] Mohajerani, S., Asad, R., Abhishek, K., Sharma, N., van Duynhoven, A., and Saeedi, P., “CloudMaskGAN: A content-aware unpaired image-to-image translation algorithm for remote sensing imagery,” in [International Conference on Image Processing (ICIP)], 1965–1969, IEEE (2019).
[9] Ren, C. X., Ziemann, A., Durieux, A., and Theiler, J., “Cycle-consistent adversarial networks for realistic pervasive change generation in remote sensing imagery,” arXiv preprint arXiv:1911.12546 (2019).
[10] Theiler, J. and Perkins, S., “Proposed framework for anomalous change detection,” in [ICML Workshop on Machine Learning Algorithms for Surveillance and Event Detection], 7–14 (2006).
[11] Goodfellow, I., “NIPS 2016 tutorial: Generative adversarial networks,” arXiv preprint arXiv:1701.00160 (2016).
[12] Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A., “Image-to-image translation with conditional adversarial networks,” in [Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)], 1125–1134 (2017).
[13] Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A., “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in [Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)], 2223–2232 (2017).
[14] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A., “Going deeper with convolutions,” in [Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)], 1–9 (2015).
[15] Dowson, D. C. and Landau, B. V., “The Fréchet distance between multivariate normal distributions,” Journal of Multivariate Analysis 12(3), 450–455 (1982).
[16] Vaserstein, L. N., “Markov processes over denumerable products of spaces, describing large systems of automata,” Problemy Peredachi Informatsii 5(3), 64–72 (1969).
[17] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S., “GANs trained by a two timescale update rule converge to a local Nash equilibrium,” in [Advances in Neural Information Processing Systems], 6626–6637 (2017).
[18] He, K., Zhang, X., Ren, S., and Sun, J., “Deep residual learning for image recognition,” in [Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)], 770–778 (2016).
[19] Keisler, R., Skillman, S. W., Gonnabathula, S., Poehnelt, J., Rudelis, X., and Warren, M. S., “Visual search over billions of aerial and satellite images,” Computer Vision and Image Understanding 187, 102790 (2019).
[20] Longbotham, N., Pacifici, F., Glenn, T., Zare, A., Volpi, M., Tuia, D., Christophe, E., Michel, J., Inglada, J., Chanussot, J., et al., “Multi-modal change detection, application to the detection of flooded areas: Outcome of the 2009–2010 data fusion contest,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 5(1), 331–342 (2012).
[21] Ziemann, A., Ren, C. X., and Theiler, J., “Multi-sensor anomalous change detection at scale,” in [Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXV], 10986, 1098615, International Society for Optics and Photonics (2019).
[22] Touati, R., Mignotte, M., and Dahmane, M., “Multimodal change detection in remote sensing images using an unsupervised pixel pairwise-based Markov random field model,” IEEE Trans. Image Processing 29, 757–767 (2019).