CN106897681B - Remote sensing image contrast analysis method and system - Google Patents

Remote sensing image contrast analysis method and system

Info

Publication number
CN106897681B
CN106897681B (application CN201710080906.2A)
Authority
CN
China
Prior art keywords
remote sensing
images
convolution
sensing images
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710080906.2A
Other languages
Chinese (zh)
Other versions
CN106897681A (en)
Inventor
涂刚 (Tu Gang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Xienzhuo Technology Co ltd
Original Assignee
Wuhan Xienzhuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Xienzhuo Technology Co ltd
Priority to CN201710080906.2A
Publication of CN106897681A
Application granted
Publication of CN106897681B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention relates to a remote sensing image contrast analysis method and system, wherein the method comprises the following steps: S1: identifying and segmenting the ground objects in two remote sensing images of the same region captured at different times through a full convolution network, to obtain segmented images of all the ground objects in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated ('loose') convolution layers; S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result. The beneficial effects of the invention are that the technical scheme tolerates interference factors such as atmosphere and season well and achieves a higher recognition rate for dense ground objects.

Description

Remote sensing image contrast analysis method and system
Technical Field
The invention relates to the technical field of remote sensing image contrast analysis, in particular to a remote sensing image contrast analysis method and system.
Background
Comparative analysis of remote sensing images from different periods, also called change detection, is a key technology of geographic information systems and plays a very important role in land planning, disaster prevention and control, unmanned aerial vehicles, satellites, unmanned ships and resource monitoring. Traditional pixel-based comparison algorithms cannot effectively suppress the interference present in remote sensing images and cannot compare the ground objects in the images by class. Existing image comparison methods compare the two images directly, and the comparison results are coarse and inaccurate.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: traditional pixel-based comparison algorithms cannot effectively suppress the interference present in remote sensing images, cannot compare the ground objects in the images by class, and produce coarse and inaccurate comparison results.
The technical scheme for solving the technical problems is as follows:
a method of remote sensing image contrast analysis, comprising:
S1: identifying and segmenting the ground objects in two remote sensing images of the same region captured at different times through a full convolution network, to obtain segmented images of all the ground objects in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated ('loose') convolution layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result.
The beneficial effects of the invention are as follows: the features of the images to be compared are extracted with the parameters obtained by training the convolution network and are classified pixel by pixel; the classification result is an image in which different ground objects are filled with different pixel values, so that the different types of ground objects are separated and their precise edges are marked at the same time; the remote sensing images of the same region at different times are then compared and analyzed on the basis of this classification result, i.e. the images in which different ground objects are filled with different pixel values, and the comparison shows whether the region has changed between the two times.
On the basis of the technical scheme, the invention can be further improved as follows.
Preferably, the step S1 includes:
S11: respectively putting the two remote sensing images into the full convolution network;
S12: for each of the two remote sensing images, fusing, multiple times, the image obtained after coordinate-point marking by at least one convolution layer group with the image obtained after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S13: after the two remote sensing images and the fused images are each marked by the coordinate points of at least one deconvolution layer, fusing them multiple times to obtain ground-feature classification probability maps;
S14: segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images.
The beneficial effect of adopting this further scheme is as follows: the full convolution network replaces the fully connected layers of a conventional network with convolutions and adds deconvolution layers, and the outputs of the first few layers of the network are fused with the final network output so that more image information is retained; the ground object targets are then distinguished from the background through a CRF (conditional random field) probability model to obtain a segmented image of each ground object for further comparative analysis.
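As a purely illustrative sketch of such a network (not taken from the patent: the number of layer groups, channel widths, max-pooling for down-sampling, element-wise addition for the fusion step and the class count are all assumptions), a full convolution network with alternating standard and dilated convolutions, deconvolution layers and fusion of early features could look as follows in PyTorch; the CRF refinement of step S14 is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGroup(nn.Module):
    """One 'convolution layer group': a standard 3x3 convolution alternating
    with a dilated ('loose') 3x3 convolution, followed by 2x down-sampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

class TinyFCN(nn.Module):
    """Minimal full convolution network: three convolution layer groups,
    three deconvolution layers, and fusion of earlier feature maps with
    up-sampled deep feature maps; outputs a per-pixel class probability map."""
    def __init__(self, num_classes, in_ch=3):
        super().__init__()
        self.g1 = ConvGroup(in_ch, 32)
        self.g2 = ConvGroup(32, 64)
        self.g3 = ConvGroup(64, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)          # 1/8 -> 1/4 resolution
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)           # 1/4 -> 1/2 resolution
        self.up3 = nn.ConvTranspose2d(32, num_classes, 2, stride=2)  # 1/2 -> full resolution

    def forward(self, x):                      # x: (N, in_ch, H, W), H and W divisible by 8
        x1 = self.g1(x)                        # 1/2 resolution, 32 channels
        x2 = self.g2(x1)                       # 1/4 resolution, 64 channels
        x3 = self.g3(x2)                       # 1/8 resolution, 128 channels
        d1 = self.up1(x3) + x2                 # fuse deconvolution output with earlier features
        d2 = self.up2(d1) + x1                 # second fusion step
        logits = self.up3(d2)                  # back to input resolution
        return F.softmax(logits, dim=1)        # ground-feature classification probability map

probs = TinyFCN(num_classes=5)(torch.randn(1, 3, 256, 256))   # -> (1, 5, 256, 256)
label_map = probs.argmax(dim=1)                                # per-pixel class indices
```

In a complete system the probability map would be handed to the CRF model of step S14 rather than to the plain arg-max used at the end of this sketch.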
Preferably, in step S2, the segmented images of the same ground object in the two remote sensing images are compared and analyzed one by one through a comparison neural network to obtain the comparative analysis result.
Preferably, the comparison neural network is a 2-channel network or a Siamese network.
The beneficial effect of adopting this further scheme is as follows: the two remote sensing images of the same region taken at different times are compared and analyzed; the 2-channel network takes the segmentation result images of the two compared images as two input channels and feeds them directly into one neural network for comparison, while the Siamese network consists of two sub-networks sharing parameters, the two result images are used as the inputs of the two sub-networks, and the comparison result is obtained after their features are extracted.
A remote sensing image contrast analysis system, comprising:
the segmentation module is used for identifying and segmenting the ground features in two remote sensing images of the same region captured at different times through a full convolution network to obtain segmented images of all the ground features in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
and the comparison module is used for carrying out comparison analysis on the segmentation images of the same ground object in the two remote sensing images one by one to obtain a comparison analysis result.
Preferably, the segmentation module comprises:
the input submodule is used for respectively putting the two remote sensing images into the full convolution network;
the first fusion submodule is used for fusing, multiple times, the image of each remote sensing image after coordinate-point marking by at least one convolution layer group with the image after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
the second fusion submodule is used for fusing, multiple times, the two remote sensing images and the fused images after each is marked by the coordinate points of at least one deconvolution layer, to obtain ground-feature classification probability maps;
and the segmentation submodule is used for segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images.
Preferably, the comparison module is specifically configured to perform comparison analysis on the segmented images of the same ground object in the two remote sensing images one by one through a comparison neural network to obtain a comparison analysis result.
Preferably, the comparison neural network is a 2-channel network or a Siamese network.
Drawings
Fig. 1 is a schematic flow chart of a remote sensing image contrast analysis method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a remote sensing image contrast analysis method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a remote sensing image contrast analysis system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a remote sensing image contrast analysis system according to another embodiment of the present invention.
Detailed Description
The principles and features of the invention are described below with reference to the accompanying drawings; the examples given are illustrative only and are not intended to limit the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides a remote sensing image contrast analysis method, including:
S1: identifying and segmenting the ground objects in two remote sensing images of the same region captured at different times through a full convolution network, to obtain segmented images of all the ground objects in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain a comparative analysis result.
Specifically, in this embodiment, the features of the images to be compared are extracted with the parameters obtained by training the convolution network and are classified pixel by pixel; the classification result is an image in which different ground objects are filled with different pixel values, so that the different types of ground objects are separated and their precise edges are marked at the same time; the remote sensing images of the same region at different times are then compared and analyzed on the basis of this classification result, i.e. the images in which different ground objects are filled with different pixel values, and the comparison shows whether the region has changed between the two times.
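As a minimal illustrative sketch of what comparing the classification results means (assuming each segmentation result is a label map in which every pixel holds a ground-object class index; the function name and report format are illustrative, not from the patent), the per-class change between the two dates can be tallied as follows:

```python
import numpy as np

def compare_label_maps(labels_t1, labels_t2, num_classes):
    """Compare two per-pixel class maps of the same region taken at different
    times and report, per ground-object class, how many pixels appeared,
    disappeared, or stayed the same."""
    report = {}
    for c in range(num_classes):
        mask_t1 = labels_t1 == c                     # where class c was present at time 1
        mask_t2 = labels_t2 == c                     # where class c is present at time 2
        report[c] = {
            "unchanged": int(np.sum(mask_t1 & mask_t2)),
            "removed":   int(np.sum(mask_t1 & ~mask_t2)),
            "added":     int(np.sum(~mask_t1 & mask_t2)),
        }
    return report

# Toy usage with two 4x4 label maps and 3 classes (0 = background).
t1 = np.array([[0, 0, 1, 1],
               [0, 2, 1, 1],
               [2, 2, 0, 0],
               [2, 2, 0, 0]])
t2 = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 1, 0],
               [2, 2, 0, 0]])
print(compare_label_maps(t1, t2, num_classes=3))
```

The patent itself performs this comparison with a learned comparison neural network (described further below); the pixel-counting version above only illustrates what comparing the segmented images of the same ground object amounts to.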
In this embodiment, several data enhancement methods are adopted during convolution network training so that a high training accuracy is reached even with a small amount of labelled data; the data enhancement methods used include rotation and mirroring, and mirroring or rotating the images effectively enlarges the data set, improves the quality of network training, and helps prevent over-fitting.
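A small sketch of this rotation-and-mirror augmentation (the eight-fold expansion factor and the numpy-based implementation are illustrative assumptions, not details given by the patent); rotating the image tile and its pixel-wise label map together keeps the annotation aligned:

```python
import numpy as np

def augment(image, label):
    """Yield rotated and mirrored copies of an image tile together with its
    pixel-wise label map, expanding one labelled sample into eight."""
    for k in range(4):                                   # 0, 90, 180, 270 degree rotations
        rot_img = np.rot90(image, k, axes=(0, 1))
        rot_lbl = np.rot90(label, k, axes=(0, 1))
        yield rot_img, rot_lbl                           # rotated copy
        yield rot_img[:, ::-1], rot_lbl[:, ::-1]         # rotated + horizontally mirrored copy

tile = np.zeros((256, 256, 3), dtype=np.uint8)           # one training tile (H, W, channels)
mask = np.zeros((256, 256), dtype=np.int64)              # its per-pixel class labels
augmented = list(augment(tile, mask))                     # 8 aligned (image, label) pairs
```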
As shown in fig. 2, in another embodiment, step S1 in fig. 1 includes:
S11: respectively putting the two remote sensing images into the full convolution network;
S12: for each of the two remote sensing images, fusing, multiple times, the image obtained after coordinate-point marking by at least one convolution layer group with the image obtained after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S13: after the two remote sensing images and the fused images are each marked by the coordinate points of at least one deconvolution layer, fusing them multiple times to obtain ground-feature classification probability maps;
S14: segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images.
Specifically, in this embodiment, the full convolution network replaces the fully connected layers of a conventional network with convolutions, adds deconvolution layers, and fuses the outputs of the first few layers of the network with the final network output to obtain more image information; the ground object targets are then distinguished from the background through a CRF (conditional random field) probability model to obtain a segmented image of each ground object for further comparative analysis. The CRF combines the characteristics of the maximum entropy model and the hidden Markov model; it is an undirected graphical model and has performed well in recent years in sequence labelling tasks such as word segmentation, part-of-speech tagging and named entity recognition. The CRF is a typical discriminative model.
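For illustration only, the CRF refinement step can be sketched with the third-party pydensecrf package (a dense CRF implementation that the patent does not reference; the kernel parameters below are arbitrary assumptions): the unary term comes from the network's probability map and the pairwise terms encourage labels to form smooth, edge-aligned regions.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(rgb_image, prob_map, iters=5):
    """Refine a (num_classes, H, W) softmax probability map with a dense CRF
    so that segment boundaries follow image content; returns an (H, W) label map.
    rgb_image must be a contiguous uint8 array of shape (H, W, 3)."""
    n_classes, h, w = prob_map.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(prob_map))       # -log(probability) unary potentials
    d.addPairwiseGaussian(sxy=3, compat=3)               # location-only smoothness term
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=rgb_image, compat=5)  # colour/edge-aware term
    q = np.array(d.inference(iters)).reshape(n_classes, h, w)
    return q.argmax(axis=0)                              # per-pixel class labels
```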
In step S2, the segmented images of the same ground object in the two remote sensing images are compared and analyzed one by one through a comparison neural network to obtain the comparative analysis result.
The comparison neural network is a 2-channel network or a Siamese network.
Specifically, in this embodiment, the two remote sensing images of the same region taken at different times are compared and analyzed: the 2-channel network takes the segmentation result images of the two compared images as two input channels and feeds them directly into one neural network for comparison, while the Siamese network consists of two sub-networks sharing parameters, the two result images are used as the inputs of the two sub-networks, and the comparison result is obtained after their features are extracted.
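The two comparison variants can be sketched in PyTorch as follows (a purely illustrative sketch: the layer sizes, pooling, change-score head and distance measure are assumptions, not architectures specified by the patent):

```python
import torch
import torch.nn as nn

def small_cnn(in_ch):
    """Tiny feature extractor shared by both comparison variants below."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class TwoChannelNet(nn.Module):
    """2-channel variant: the two segmentation result images are stacked as the two
    channels of a single input and fed to one network that predicts a change score."""
    def __init__(self):
        super().__init__()
        self.features = small_cnn(in_ch=2)
        self.head = nn.Linear(32, 1)

    def forward(self, seg_t1, seg_t2):                   # each: (N, 1, H, W)
        x = torch.cat([seg_t1, seg_t2], dim=1)           # stack as two channels
        return torch.sigmoid(self.head(self.features(x)))

class SiameseNet(nn.Module):
    """Siamese variant: one parameter-sharing branch encodes each result image,
    and the distance between the two embeddings indicates how much has changed."""
    def __init__(self):
        super().__init__()
        self.branch = small_cnn(in_ch=1)                  # same weights applied to both inputs

    def forward(self, seg_t1, seg_t2):
        e1, e2 = self.branch(seg_t1), self.branch(seg_t2)
        return torch.norm(e1 - e2, dim=1)                 # larger distance = more change

a = torch.rand(1, 1, 64, 64)                              # segmentation result at time 1
b = torch.rand(1, 1, 64, 64)                              # segmentation result at time 2
print(TwoChannelNet()(a, b).item(), SiameseNet()(a, b).item())
```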
As shown in fig. 3, an embodiment of the present invention further provides a remote sensing image contrast analysis system, including:
the segmentation module 1 is used for identifying and segmenting the ground features in two remote sensing images of the same region captured at different times through a full convolution network to obtain segmented images of all the ground features in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
and the comparison module 2 is used for comparing and analyzing the segmentation images of the same ground object in the two remote sensing images to obtain a comparison and analysis result.
As shown in fig. 4, in another embodiment, the segmentation module 1 in fig. 3 includes:
the input submodule 11 is used for respectively putting the two remote sensing images into the full convolution network;
the first fusion submodule 12 is used for fusing, multiple times, the image of each remote sensing image after coordinate-point marking by at least one convolution layer group with the image after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
the second fusion submodule 13 is used for fusing, multiple times, the two remote sensing images and the fused images after each is marked by the coordinate points of at least one deconvolution layer, to obtain ground-feature classification probability maps;
and the segmentation submodule 14 is used for identifying and segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images.
The comparison module 2 is specifically configured to perform comparison analysis on the segmented images of the same ground object in the two remote sensing images one by one through a comparison neural network to obtain a comparison analysis result.
The comparison neural network is a 2-channel network or a Siamese network.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A method for remote sensing image contrast analysis, comprising:
S1: identifying and segmenting the ground objects in two remote sensing images of the same region captured at different times through a full convolution network to obtain segmented images of all the ground objects in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer groups and a plurality of deconvolution layers, and each convolution layer group comprises alternately arranged convolution layers and dilated convolution layers;
S2: comparing and analyzing the segmented images of the same ground object in the two remote sensing images to obtain comparative analysis results;
The step S1 includes:
S11: respectively putting the two remote sensing images into the full convolution network;
S12: for each of the two remote sensing images, fusing, multiple times, the image obtained after coordinate-point marking by at least one convolution layer group with the image obtained after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
S13: after the two remote sensing images and the fused images are each marked by the coordinate points of at least one deconvolution layer, fusing them multiple times to obtain ground-feature classification probability maps;
S14: segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images;
in step S2, the segmented images of the same ground object in the two remote sensing images are compared and analyzed one by one through a comparison neural network to obtain the comparative analysis result.
2. The remote sensing image contrast analysis method according to claim 1, wherein the comparison neural network is a 2-channel network or a Siamese network.
3. A remote sensing image contrast analysis system, comprising:
the segmentation module (1) is used for respectively identifying and segmenting all ground features in two remote sensing images shot in different time in the same region through a full convolution network to obtain a segmented image of each ground feature in the two remote sensing images, wherein the full convolution network comprises a plurality of convolution layer sets and a plurality of deconvolution layers, and the convolution layer sets comprise convolution layers and loose convolution layers which are arranged alternately;
the comparison module (2) is used for carrying out comparison analysis on the segmentation images of the same ground object in the two remote sensing images one by one to obtain comparison analysis results;
the segmentation module (1) comprises:
the input submodule (11) is used for respectively putting the two remote sensing images into the full convolution network;
the first fusion submodule (12) is used for fusing, multiple times, the image of each remote sensing image after coordinate-point marking by at least one convolution layer group with the image after coordinate-point marking by all the convolution layer groups and at least one deconvolution layer, to obtain a fused image;
the second fusion submodule (13) is used for fusing, multiple times, the two remote sensing images and the fused images after each is marked by the coordinate points of at least one deconvolution layer, to obtain ground-feature classification probability maps;
the segmentation submodule (14) is used for identifying and segmenting the ground features in the two ground-feature classification probability maps through a CRF probability model to obtain segmented images of all the ground features in the two remote sensing images;
the comparison module (2) is used for carrying out comparison analysis on the segmentation images of the same ground object in the two remote sensing images through a comparison neural network to obtain comparison analysis results.
4. The remote sensing image contrast analysis system according to claim 3, wherein the comparison neural network is a 2-channel network or a Siamese network.
CN201710080906.2A 2017-02-15 2017-02-15 Remote sensing image contrast analysis method and system Active CN106897681B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710080906.2A CN106897681B (en) 2017-02-15 2017-02-15 Remote sensing image contrast analysis method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710080906.2A CN106897681B (en) 2017-02-15 2017-02-15 Remote sensing image contrast analysis method and system

Publications (2)

Publication Number Publication Date
CN106897681A CN106897681A (en) 2017-06-27
CN106897681B true CN106897681B (en) 2020-11-10

Family

ID=59198665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710080906.2A Active CN106897681B (en) 2017-02-15 2017-02-15 Remote sensing image contrast analysis method and system

Country Status (1)

Country Link
CN (1) CN106897681B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171122A (en) * 2017-12-11 2018-06-15 南京理工大学 The sorting technique of high-spectrum remote sensing based on full convolutional network
CN108537824B (en) * 2018-03-15 2021-07-16 上海交通大学 Feature map enhanced network structure optimization method based on alternating deconvolution and convolution
CN108776805A (en) * 2018-05-03 2018-11-09 北斗导航位置服务(北京)有限公司 It is a kind of establish image classification model, characteristics of image classification method and device
CN108961236B (en) * 2018-06-29 2021-02-26 国信优易数据股份有限公司 Circuit board defect detection method and device
CN109409263B (en) * 2018-10-12 2021-05-04 武汉大学 Method for detecting urban ground feature change of remote sensing image based on Siamese convolutional network
CN109711311B (en) * 2018-12-20 2020-11-20 北京以萨技术股份有限公司 Optimal frame selection method based on dynamic human face
CN110570397B (en) * 2019-08-13 2020-12-04 创新奇智(重庆)科技有限公司 Method for detecting ready-made clothes printing defects based on deep learning template matching algorithm
CN110781948A (en) * 2019-10-22 2020-02-11 北京市商汤科技开发有限公司 Image processing method, device, equipment and storage medium
CN111274938B (en) * 2020-01-19 2023-07-21 四川省自然资源科学研究院 Web-oriented high-resolution remote sensing river water quality dynamic monitoring method and system
CN112816000A (en) * 2021-02-26 2021-05-18 华南理工大学 Comprehensive index evaluation method and system for indoor and outdoor wind environment quality of green building group
CN116977747B (en) * 2023-08-28 2024-01-23 中国地质大学(北京) Small sample hyperspectral classification method based on multipath multi-scale feature twin network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147920A (en) * 2011-03-02 2011-08-10 上海大学 Shadow detection method for high-resolution remote sensing image
CN102855759A (en) * 2012-07-05 2013-01-02 中国科学院遥感应用研究所 Automatic collecting method of high-resolution satellite remote sensing traffic flow information
CN105809693A (en) * 2016-03-10 2016-07-27 西安电子科技大学 SAR image registration method based on deep neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102147920A (en) * 2011-03-02 2011-08-10 上海大学 Shadow detection method for high-resolution remote sensing image
CN102855759A (en) * 2012-07-05 2013-01-02 中国科学院遥感应用研究所 Automatic collecting method of high-resolution satellite remote sensing traffic flow information
CN105809693A (en) * 2016-03-10 2016-07-27 西安电子科技大学 SAR image registration method based on deep neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fully convolutional network combined with an improved conditional random field-recurrent neural network for SAR image scene classification; Tang Hao, He Chu; Journal of Computer Applications; 2016-12-10; Vol. 36, No. 12; pp. 3436-3441 *
Research on change detection methods for remote sensing images; Zhang Fengyu; China Master's Theses Full-text Database, Information Science and Technology; 2010-12-15 (No. 12); I140-320 *

Also Published As

Publication number Publication date
CN106897681A (en) 2017-06-27

Similar Documents

Publication Publication Date Title
CN106897681B (en) Remote sensing image contrast analysis method and system
Laddha et al. Map-supervised road detection
Chen et al. Vehicle detection in high-resolution aerial images via sparse representation and superpixels
CN108846835B (en) Image change detection method based on depth separable convolutional network
Dornaika et al. Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors
CN109086668B (en) Unmanned aerial vehicle remote sensing image road information extraction method based on multi-scale generation countermeasure network
CN106910202B (en) Image segmentation method and system for ground object of remote sensing image
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN108305260B (en) Method, device and equipment for detecting angular points in image
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN111340855A (en) Road moving target detection method based on track prediction
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
US10685443B2 (en) Cloud detection using images
CN110648310A (en) Weak supervision casting defect identification method based on attention mechanism
CN113838064B (en) Cloud removal method based on branch GAN using multi-temporal remote sensing data
CN114998744B (en) Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion
CN109360191B (en) Image significance detection method based on variational self-encoder
CN112488229A (en) Domain self-adaptive unsupervised target detection method based on feature separation and alignment
CN111950498A (en) Lane line detection method and device based on end-to-end instance segmentation
CN106897683B (en) Ground object detection method and system of remote sensing image
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN107170004B (en) Image matching method for matching matrix in unmanned vehicle monocular vision positioning
CN109934147B (en) Target detection method, system and device based on deep neural network
CN107230201B (en) Sample self-calibration ELM-based on-orbit SAR (synthetic aperture radar) image change detection method
CN110826432B (en) Power transmission line identification method based on aviation picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant