CN115830448B - Remote sensing image comparison analysis method based on multi-view fusion - Google Patents


Info

Publication number
CN115830448B
CN115830448B (Application CN202211518990.9A)
Authority
CN
China
Prior art keywords
remote sensing
change
target object
region
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211518990.9A
Other languages
Chinese (zh)
Other versions
CN115830448A (en)
Inventor
范儒彬
马荣华
梁华
李晓威
Current Assignee
Guangzhou Fu'an Digital Technology Co ltd
Guangzhou Geological Survey Institute (Guangzhou Geology Environment Monitoring Center)
Original Assignee
Guangzhou Fu'an Digital Technology Co ltd
Guangzhou Geological Survey Institute (Guangzhou Geology Environment Monitoring Center)
Priority date
Filing date
Publication date
Application filed by Guangzhou Fu'an Digital Technology Co ltd and Guangzhou Geological Survey Institute (Guangzhou Geology Environment Monitoring Center)
Priority to CN202211518990.9A
Publication of CN115830448A
Application granted
Publication of CN115830448B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a remote sensing image comparison analysis method based on multi-view fusion, which comprises the following steps: performing target detection on all target objects in two remote sensing images of the same region acquired in different time periods, using an optimized target detection model, to obtain the information sets of all target objects in each of the two images; performing primary change detection on the remote sensing images with a multi-feature hierarchical CVA change detection method to obtain a change region set; performing sea-land segmentation on the two remote sensing images with an improved U-Net network model to obtain sea-land segmentation results; and performing secondary change detection based on the information sets, the change region set and the sea-land segmentation results to obtain the final change detection result. The invention detects changes in remote sensing images from both global and local perspectives, reduces missed detections in the change detection result, and effectively improves the accuracy of change detection.

Description

Remote sensing image comparison analysis method based on multi-view fusion
Technical Field
The invention relates to the technical field of image change detection, in particular to a remote sensing image comparison analysis method based on multi-view fusion.
Background
With the rapid development of remote sensing technology, remote sensing images have been widely used in fields such as land use monitoring, forest monitoring, urban monitoring, and natural disaster assessment and analysis.
Comparison analysis of remote sensing images acquired in different periods is also known as change detection. Remote sensing image change detection analyzes two or more remote sensing images of the same area taken at different phases with appropriate algorithms to find the changed regions. It can provide large-scale information on surface change, is one of the important research directions of remote sensing technology, and plays a vital role in fields such as landform monitoring, natural disaster monitoring, environment monitoring and forest resource monitoring.
Methods based on target detection can rapidly judge the change at a position by identifying the target objects in the remote sensing images and comparing the types of the target objects at the same position in images taken at different times. However, limited by the recognition accuracy of the target detection model, they can produce false detections and missed detections.
Sea-land segmentation of remote sensing images based on neural networks makes coastline changes easy to find by comparing the obtained sea-land boundaries, so it plays a vital role in coastline comparison analysis; however, it cannot provide a global comparison analysis of the remote sensing image.
The CVA change detection method based on multi-feature classification constructs a difference image and extracts change information from it with a thresholding method. Most threshold decision algorithms have obvious limitations: once the histogram of the difference image does not follow the assumed distribution, changes cannot be judged effectively, causing missed and false detections and hurting the robustness of the method.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a remote sensing image contrast analysis method based on multi-view fusion.
In order to achieve the above object, the present invention provides the following solutions:
a remote sensing image contrast analysis method based on multi-view fusion comprises the following steps:
performing target detection on all target objects in two remote sensing images of the same region taken in different time periods, using the optimized target detection model, to obtain information sets of all target objects for each of the two remote sensing images;
performing primary change detection on the remote sensing images by using a multi-feature hierarchical CVA change detection method to obtain a change region set;
based on an improved U-Net network model, sea-land segmentation is carried out on the two remote sensing images, and sea-land segmentation results are obtained;
and performing secondary change detection according to the information set, the change area set and the sea-land segmentation result to obtain a final change detection result.
Preferably, the optimization method of the target detection model includes:
collecting an open-source remote sensing image data set, and marking a remote sensing image graph in the remote sensing image data set by using a marking tool to form a data set with an identification frame;
randomly dividing the data set with the identification frame into a first training set and a first test set;
based on the stochastic gradient descent (SGD) method, inputting the first training set into the original detection model for training to obtain an initial model, and testing the initial model on the first test set;
performing image preprocessing operation on two remote sensing images in the same region and at different time intervals, and inputting the remote sensing images subjected to the image preprocessing into the initial model for target detection to obtain an initial information set;
in the two remote sensing images after target detection, retaining only the target object information that appears at the same position with the same type in the initial information sets, and removing all other target object information, to obtain new remote sensing images;
and inputting the new remote sensing image into the initial model to perform a multi-round training optimization process to obtain the optimized target detection model.
Preferably, performing primary change detection on the remote sensing images by using the multi-feature hierarchical CVA change detection method to obtain a change region set includes:
performing multi-scale image segmentation on the remote sensing image based on an image segmentation method of the region adjacency graph so as to extract features under each scale;
random forest feature selection is carried out on the features extracted under each scale, and hierarchical CVA change detection is carried out by utilizing the optimized feature vectors, so that a binary change detection result is obtained;
and determining the change region set according to the binary change detection result.
Preferably, performing sea-land segmentation on the two remote sensing images based on the improved U-Net network model to obtain sea-land segmentation results includes:
acquiring an open source remote sensing image, and performing image preprocessing on the open source remote sensing image to obtain a preprocessed image;
interpreting the preprocessed image by using ArcMap software to obtain a binary segmentation sample map; green pixels in the binary segmentation sample map represent land and blue pixels represent sea;
determining a second training set, a second testing set and a verification set according to the binary segmentation sample diagram;
training, testing and verifying an initial segmentation model according to the second training set, the second testing set and the verification set to obtain the trained improved U-Net network model;
inputting the two remote sensing images into the improved U-Net network model, and outputting to obtain a sea-land segmentation result diagram; the sea-land segmentation result graph comprises two remote sensing images with dividing lines and a dividing line set.
Preferably, performing the secondary change detection according to the information sets, the change region set and the sea-land segmentation result to obtain the final change detection result includes:
according to the two information sets, performing comparison analysis on the corresponding target objects in the two remote sensing images to obtain an analysis result; the analysis result includes: target objects detected at the same position with consistent types; target objects detected at the same position with inconsistent types; and a target object detected at only one of the two positions;
determining as change areas both the positions where target objects are detected at the same position with inconsistent types and the positions where a target object is detected in only one image, obtaining the set of target object positions with large differences;
performing difference processing on the two remote sensing images with the dividing lines to obtain a difference image;
obtaining a position set with large boundary difference according to the positions with the difference in the difference images;
respectively finding, in the two remote sensing images with dividing lines, the offset positions in the position set with large dividing-line differences;
judging, in the order of the image acquisition times and in combination with the dividing line, whether each offset position shifts toward the sea area or toward the land, to obtain a judgment result;
analyzing, according to the judgment result, whether the land area or the sea area shrinks at the position corresponding to the difference;
performing grid division on the remote sensing image, and setting a threshold value;
comparing each target object position in the target object position set with the corresponding region in the change region set, and finding a change region comprehensive set with the overlapping ratio larger than the threshold value; the comprehensive change area set comprises a target object position subset and a change area subset;
comparing the position set with large boundary difference with the change area set in a cooperative manner to find a change position comprehensive set with the overlap ratio larger than the threshold value; the change position comprehensive set comprises a difference position subset and a change position subset;
converting the difference position subset and the change position subset from the point set to a set in a frame form respectively to obtain a first conversion set and a second conversion set;
combining the target object position subset with the first conversion set to obtain a first combined set;
combining the change region subset with the second conversion set to obtain a second combined set;
labeling the change areas with the coincidence ratio smaller than the threshold value in the two remote sensing images in the form of identification frames;
carrying out matting processing on the part of the area where the identification frame is positioned by using a matting technology so as to realize separation of the identification frame and the background, thereby obtaining a new remote sensing image only comprising the identification frame area;
performing difference on the two new remote sensing images to obtain an identification area set;
and merging the first merging set, the second merging set and the identification area set to obtain the final change detection result.
Preferably, the calculation formula of the coincidence ratio is:
wherein Ω_i is the coincidence ratio of the i-th change region; M_i is the number of grid cells in which the i-th change region overlaps; O_d is the set of target object positions with large differences; L_d is the set of dividing-line positions with large differences; Z_d is the change region set; a_i is the number of grid cells occupied by the i-th change region in O_d or L_d; b_i is the number of grid cells occupied by the corresponding area in Z_d; Len() computes the length of a set; and Q is the total number of change regions.
Preferably, the change region in which the coincidence ratio is smaller than the threshold value includes:
the set of differences between the set of widely-differing target object positions and the subset of target object positions, the set of differences between the set of widely-differing positions and the subset of change regions, and the set of differences between the set of change regions and the second merged set.
Preferably, the comparing each target object position in the target object position set with the corresponding region in the change region set to find a change region comprehensive set with a overlap ratio greater than the threshold value includes:
calculating and recording the number of grid cells occupied by each target object in the set of target object positions with large differences and the number of grid cells occupied by the corresponding area in the change region set; comparing the grid cells occupied by each target object with those occupied by the corresponding position in the change region set; and calculating the first grid number, i.e. the number of overlapping grid cells for each change region;
calculating the coincidence ratio of each change area according to the first grid number;
and screening out a change region with the overlapping ratio larger than a threshold value from the target object position set and the change region set with large difference to obtain the target object position subset and the change region subset.
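Since the coincidence-ratio formula itself appears as an image in the original, the sketch below assumes an IoU-style ratio built from the grid-cell counts the text defines (M_i overlapped cells, a_i and b_i cells occupied by each side); the exact expression in the patent may differ, and the threshold value is a free parameter:

```python
def coincidence_ratio(cells_a, cells_b):
    """cells_a: set of grid cells covered by a target-object position;
    cells_b: set of grid cells covered by the corresponding change region.
    Assumed IoU-style ratio: overlapped cells / union of cells."""
    m = len(cells_a & cells_b)          # M_i: overlapped grid cells
    a, b = len(cells_a), len(cells_b)   # a_i, b_i: cells occupied by each side
    return m / (a + b - m) if (a + b - m) else 0.0

# Example: object covers a 3x3 block of cells, change region a shifted 3x3 block
obj = {(r, c) for r in range(3) for c in range(3)}
chg = {(r, c) for r in range(1, 4) for c in range(1, 4)}
ratio = coincidence_ratio(obj, chg)
keep = ratio > 0.2                      # illustrative threshold
```

Regions with `keep` true would enter the target object position subset and change region subset; the rest fall to the identification-frame branch.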
Preferably, comparing the set of dividing-line positions with large differences with the change region set to find the comprehensive set of change positions whose coincidence ratio is greater than the threshold value includes:
dividing the remote sensing image into a plurality of regions of the same size; calculating and recording the number of grid cells occupied in each region by the position set with large dividing-line differences and by the change region set; comparing the same regions; and calculating the second grid number, i.e. the number of overlapping grid cells in the same regions;
calculating the coincidence ratio of each region according to the second grid number;
and screening out the region with the overlapping ratio larger than a threshold value from the position set with large boundary difference and the change region set, and reserving overlapped grid positions to obtain the difference position subset and the change position subset.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a remote sensing image contrast analysis method based on multi-view fusion, which comprises the following steps: respectively carrying out target detection on all target objects in two remote sensing images in different time periods in the same region by using the optimized target detection model to obtain information sets of all target objects respectively corresponding to the two remote sensing images; performing primary transformation detection on the remote sensing image graph by using a multi-feature hierarchical CVA change detection method to obtain a change region set; based on an improved U-Net network model, sea-land segmentation is carried out on the two remote sensing images, and sea-land segmentation results are obtained; and performing secondary change detection according to the information set, the change area set and the sea-land segmentation result to obtain a final change detection result. The invention realizes the change detection of the remote sensing image from the whole and local angles, reduces the missing detection information of the change detection result, and effectively improves the accuracy of the change detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of comparative analysis provided in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, inclusion of a list of steps, processes, methods, etc. is not limited to the listed steps but may alternatively include steps not listed or may alternatively include other steps inherent to such processes, methods, products, or apparatus.
The invention aims to provide a remote sensing image comparison analysis method based on multi-view fusion, which detects changes in remote sensing images from both global and local perspectives, reduces missed detections in the change detection result, and effectively improves the accuracy of change detection.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Fig. 1 is a flowchart of a method provided by an embodiment of the present invention, and as shown in fig. 1, the present invention provides a remote sensing image contrast analysis method based on multi-view fusion, including:
step 100: respectively carrying out target detection on all target objects in two remote sensing images in different time periods in the same region by using the optimized target detection model to obtain information sets of all target objects respectively corresponding to the two remote sensing images;
step 200: performing primary transformation detection on the remote sensing image graph by using a multi-feature hierarchical CVA change detection method to obtain a change region set;
step 300: based on an improved U-Net network model, sea-land segmentation is carried out on the two remote sensing images, and sea-land segmentation results are obtained;
step 400: and performing secondary change detection according to the information set, the change area set and the sea-land segmentation result to obtain a final change detection result.
Fig. 2 is a schematic diagram of the comparison analysis provided in the embodiment of the present invention. As shown in fig. 2, the first comparison analysis flow of this embodiment is: perform target detection on all target objects in two remote sensing images of the same region taken in different time periods. First, preprocessing operations such as radiometric correction, geometric correction, image fusion, image mosaicking and image cropping are performed on the remote sensing images; the processed images are then taken as the input of the target detection model, and YOLOv5 is used for target detection, yielding the information sets of all target objects in the two images: O_1 = {(type_i, top_i, left_i, right_i, bottom_i) | i ≤ M_1} and O_2 = {(type_j, top_j, left_j, right_j, bottom_j) | j ≤ M_2}, where type is the type of the target object, (top, left, right, bottom) is its position information, i indexes the i-th target object in the first remote sensing image, j indexes the j-th target object in the second remote sensing image, and M_1 and M_2 are the total numbers of target objects detected in the first and second remote sensing images respectively; the information of each target object comprises its type and position. The target detection model is trained as follows:
and (3) collecting an open-source remote sensing image data set, marking the remote sensing image by using a marking tool to form a data set with an identification frame, and randomly dividing the data set with the identification frame into a training set and a testing set, wherein the training set is used as the input of a target detection model to train the model, and a random gradient descent SGD method is adopted to optimize the learning rate in the training process, so that the target detection model with high accuracy and good robustness is obtained.
Preferably, the optimization method of the target detection model includes:
collecting an open-source remote sensing image data set, and marking a remote sensing image graph in the remote sensing image data set by using a marking tool to form a data set with an identification frame;
randomly dividing the data set with the identification frame into a first training set and a first test set;
based on the stochastic gradient descent (SGD) method, inputting the first training set into the original detection model for training to obtain an initial model, and testing the initial model on the first test set;
performing image preprocessing operation on two remote sensing images in the same region and at different time intervals, and inputting the remote sensing images subjected to the image preprocessing into the initial model for target detection to obtain an initial information set;
in the two remote sensing images after target detection, retaining only the target object information that appears at the same position with the same type in the initial information sets, and removing all other target object information, to obtain new remote sensing images;
and inputting the new remote sensing image into the initial model to perform a multi-round training optimization process to obtain the optimized target detection model.
Specifically, the second flow of this embodiment adaptively optimizes the target detection model to improve its detection accuracy: in the two remote sensing images after target detection, the target object information that appears at the same position with the same type is retained, the other non-conforming target object information is removed, and the resulting new remote sensing images are used as model input. The target detection model is thus continuously trained and optimized, realizing adaptive optimization and improving the accuracy with which the model identifies target objects.
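The retention rule in this flow — keep only detections that appear at the same position with the same type in both images — can be sketched as follows; the box-overlap threshold and the exact tuple layout are illustrative assumptions built on the (type, top, left, right, bottom) records defined above:

```python
def iou(a, b):
    """Overlap of two boxes given as (top, left, right, bottom),
    matching the patent's position tuples."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    if right <= left or bottom <= top:
        return 0.0
    inter = (right - left) * (bottom - top)
    area = lambda r: (r[2] - r[1]) * (r[3] - r[0])
    return inter / (area(a) + area(b) - inter)

def keep_same_position_same_type(dets1, dets2, thr=0.5):
    """Keep detections present in both images at the same position with the
    same type; everything else is removed (the patent's retention rule).
    Each detection is (type, top, left, right, bottom)."""
    kept1, kept2 = [], []
    for d1 in dets1:
        for d2 in dets2:
            if d1[0] == d2[0] and iou(d1[1:], d2[1:]) >= thr:
                kept1.append(d1)
                kept2.append(d2)
                break
    return kept1, kept2

dets1 = [("building", 0, 0, 10, 10), ("ship", 20, 20, 30, 30)]
dets2 = [("building", 0, 0, 10, 10), ("road", 20, 20, 30, 30)]
k1, k2 = keep_same_position_same_type(dets1, dets2)
```

Only the building survives: the second pair shares a position but not a type, so it is removed before the next training round.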
Preferably, performing primary change detection on the remote sensing images by using the multi-feature hierarchical CVA change detection method to obtain a change region set includes:
performing multi-scale image segmentation on the remote sensing image based on an image segmentation method of the region adjacency graph so as to extract features under each scale;
random forest feature selection is carried out on the features extracted under each scale, and hierarchical CVA change detection is carried out by utilizing the optimized feature vectors, so that a binary change detection result is obtained;
and determining the change region set according to the binary change detection result.
Further, the third flow of this embodiment uses the multi-feature hierarchical CVA change detection method to perform change detection on the remote sensing images. This method first achieves multi-scale image segmentation using an image segmentation method based on the region adjacency graph, performs random forest feature selection on the features extracted at each scale, and carries out hierarchical CVA change detection with the optimized feature vectors to obtain a binary change detection result, in which the black part represents changed regions and the white part unchanged regions, yielding the change region set Z_d.
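The core of change vector analysis (CVA) — the Euclidean magnitude of the per-pixel feature-difference vector, thresholded into a binary change map — can be sketched as below; the fixed threshold and the tiny feature stack are illustrative, whereas the patent selects features per scale with a random forest and applies the threshold hierarchically:

```python
import numpy as np

def cva_change_map(feats_t1, feats_t2, threshold):
    """feats_t1/feats_t2: (bands, H, W) feature stacks for the two dates.
    Returns a binary map: 1 = changed, 0 = unchanged."""
    diff = feats_t1.astype(float) - feats_t2.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))   # change vector magnitude
    return (magnitude > threshold).astype(np.uint8)

# Two 2-band 4x4 "images" that differ only in one 2x2 corner block
t1 = np.zeros((2, 4, 4))
t2 = np.zeros((2, 4, 4))
t2[:, :2, :2] = 5.0
change = cva_change_map(t1, t2, threshold=1.0)
```

The changed corner comes out as 1 and the rest as 0, i.e. the black/white binary result from which Z_d is read off.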
Preferably, performing sea-land segmentation on the two remote sensing images based on the improved U-Net network model to obtain sea-land segmentation results includes:
acquiring an open source remote sensing image, and performing image preprocessing on the open source remote sensing image to obtain a preprocessed image;
interpreting the preprocessed image by using ArcMap software to obtain a binary segmentation sample map; green pixels in the binary segmentation sample map represent land and blue pixels represent sea;
determining a second training set, a second testing set and a verification set according to the binary segmentation sample diagram;
training, testing and verifying an initial segmentation model according to the second training set, the second testing set and the verification set to obtain the trained improved U-Net network model;
inputting the two remote sensing images into the improved U-Net network model, and outputting to obtain a sea-land segmentation result diagram; the sea-land segmentation result graph comprises two remote sensing images with dividing lines and a dividing line set.
Optionally, the fourth flow of this embodiment realizes sea-land segmentation of the two remote sensing images based on the improved U-Net network model. In order to better preserve boundary information and obtain a better semantic segmentation effect, the improved U-Net model is adopted to perform sea-land segmentation on the remote sensing images. The improved U-Net network structure can, to a certain extent, increase the maximum trainable depth of the network, strengthen the model's learning of target edge information, and improve the accuracy of segmented target edges.
The input layer of the improved U-Net network model is changed from the original single channel to multiple channels, so the multi-channel information of remote sensing image data can be learned. The encoder consists of 5 repeated processing blocks, each comprising two consecutive convolutions with kernel size 3 × 3 and stride 1. The blocks are connected by max pooling with a 2 × 2 kernel and stride 2. The decoder is structurally symmetrical to the encoder and also consists of 5 repeated processing blocks; its convolution operations match the encoder's, and blocks are connected by transposed convolutions with a 4 × 4 kernel and stride 2. Transposed convolution lets the network learn more features than the bilinear interpolation used by the conventional U-Net. A batch normalization layer is added after each convolution operation to normalize the features output by the convolution layer toward a normal distribution; the result is fed into a linear rectification (ReLU) function, improving the nonlinear expressive capacity of the model.
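Under the stated layer parameters (3 × 3 stride-1 convolutions, 2 × 2 stride-2 max pooling between the five encoder blocks, 4 × 4 stride-2 transposed convolutions in the decoder), the spatial sizes can be checked with a small shape-arithmetic sketch. The padding of 1 for the 3 × 3 convolutions and the 512-pixel input tile are assumptions not stated in the text:

```python
def conv3x3(size):      # kernel 3, stride 1, padding 1: size preserved
    return (size + 2 * 1 - 3) // 1 + 1

def maxpool2x2(size):   # kernel 2, stride 2: size halved
    return (size - 2) // 2 + 1

def tconv4x4(size):     # transposed conv, kernel 4, stride 2, padding 1: doubled
    return (size - 1) * 2 - 2 * 1 + 4

size = 512              # assumed input tile side
for block in range(5):          # encoder: 5 blocks, pooling between them
    size = conv3x3(conv3x3(size))
    if block < 4:
        size = maxpool2x2(size)
bottleneck = size               # 512 halved four times

for _ in range(4):              # decoder: transposed convs restore the size
    size = conv3x3(conv3x3(tconv4x4(size)))
restored = size
```

The four poolings take 512 down to 32 at the bottleneck, and the symmetric 4 × 4 stride-2 transposed convolutions bring it back to 512, confirming the encoder/decoder symmetry described above.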
Training an improved U-Net network model:
and acquiring an open source remote sensing image, preprocessing, and then translating the image into a binary segmentation map by using arcmap software, wherein green pixels represent land and blue pixels represent sea. Through the steps, the training set, the testing set and the verification set are as follows: 1:1, wherein the binary image labels corresponding to each sample are contained in the random division.
Train the improved U-Net network model on a high-resolution remote sensing data set. After training, take the two remote sensing images as the input of the improved U-Net network model and output the sea-land segmentation result map, obtaining two remote sensing images with dividing lines and the dividing-line sets L_1 = {(x_i, y_i) | i ≤ N_1} and L_2 = {(x_j, y_j) | j ≤ N_2}.
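The dividing-line comparison that follows — differencing the two line sets and classifying each shift as seaward or landward — can be sketched as below. Storing each line as one y per x column and placing the sea on the larger-y side are both illustrative assumptions, not the patent's representation:

```python
def boundary_shifts(line1, line2, min_diff=2):
    """line1/line2: dict x -> y of the dividing line in the earlier/later image.
    Returns (x, direction) for columns whose boundary moved by >= min_diff.
    Assumes the sea is the region with larger y, so a downward move of the
    later line ('sea') means the boundary shifted toward the sea area."""
    shifts = []
    for x in sorted(set(line1) & set(line2)):
        d = line2[x] - line1[x]
        if abs(d) >= min_diff:
            shifts.append((x, "sea" if d > 0 else "land"))
    return shifts

earlier = {x: 50 for x in range(5)}
later = {0: 50, 1: 50, 2: 55, 3: 55, 4: 47}
moves = boundary_shifts(earlier, later)
```

Columns 2 and 3 shift toward the sea (land gained) and column 4 toward the land (sea gained); in the patent these shifts form the position set with large dividing-line differences.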
Preferably, performing the secondary change detection according to the information sets, the change region set and the sea-land segmentation result to obtain the final change detection result includes:
according to the two information sets, performing comparison analysis on the corresponding target objects in the two remote sensing images to obtain an analysis result; the analysis result includes: target objects detected at the same position with consistent types; target objects detected at the same position with inconsistent types; and a target object detected at only one of the two positions;
determining as change areas both the positions where target objects are detected at the same position with inconsistent types and the positions where a target object is detected in only one image, obtaining the set of target object positions with large differences;
performing difference processing on the two remote sensing images with the dividing lines to obtain a difference image;
obtaining a position set with large boundary difference according to the positions with the difference in the difference images;
respectively finding, in the two remote sensing images with boundary lines, the offset positions from the set of positions with large boundary-line differences;
judging, in combination with the boundary line and the temporal order in which the remote sensing images were generated, whether each offset position shifts toward the sea area or toward the land, to obtain a judgment result;
analyzing whether the position corresponding to the difference is reduced in the land area or the sea area according to the judging result;
performing grid division on the remote sensing image, and setting a threshold value;
comparing each target object position in the target object position set with the corresponding region in the change region set, and finding a change region comprehensive set with the overlapping ratio larger than the threshold value; the comprehensive change area set comprises a target object position subset and a change area subset;
comparing the set of positions with large boundary-line differences with the change region set to find a change position comprehensive set with a coincidence ratio greater than the threshold value; the change position comprehensive set comprises a difference position subset and a change position subset;
converting the difference position subset and the change position subset from the point set to a set in a frame form respectively to obtain a first conversion set and a second conversion set;
combining the target object position subset with the first conversion set to obtain a first combined set;
combining the change region subset with the second conversion set to obtain a second combined set;
labeling the change areas with the coincidence ratio smaller than the threshold value in the two remote sensing images in the form of identification frames;
carrying out matting processing on the part of the area where the identification frame is positioned by using a matting technology so as to realize separation of the identification frame and the background, thereby obtaining a new remote sensing image only comprising the identification frame area;
performing difference on the two new remote sensing images to obtain an identification area set;
and merging the first merging set, the second merging set and the identification area set to obtain the final change detection result.
Preferably, the calculation formula of the coincidence ratio is:
wherein Ω_i is the coincidence ratio of the i-th change region, M_i is the number of overlapping grids of the i-th change region, O_d is the set of target object positions with large differences, L_d is the set of positions with large boundary-line differences, Z_d is the change region set, a_i is the number of grids occupied by the i-th change region in O_d or L_d, b_i is the number of grids occupied by the corresponding region in Z_d, len() is the length of the set, and Q is the total number of change regions.
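The formula itself appears only as an image in the original publication, so the sketch below assumes a Dice-style overlap, Ω_i = 2·M_i / (a_i + b_i). This is consistent with the symbol definitions above but is an assumption, not necessarily the patent's exact expression. Grids are represented as Python sets of (row, col) cells:

```python
def coincidence_ratio(region_cells, corresponding_cells):
    """Dice-style overlap of two grid-cell sets: 2*M_i / (a_i + b_i).

    Assumed form; the patent gives the formula only as an image."""
    a = len(region_cells)                        # grids in O_d or L_d
    b = len(corresponding_cells)                 # grids in Z_d
    m = len(region_cells & corresponding_cells)  # overlapping grids M_i
    return 2 * m / (a + b) if a + b else 0.0

r1 = {(0, 0), (0, 1), (1, 0), (1, 1)}  # region from O_d (4 grids)
r2 = {(0, 1), (1, 1), (1, 2)}          # corresponding region in Z_d (3 grids)
ratio = coincidence_ratio(r1, r2)
print(round(ratio, 3))  # 0.571
```

A ratio of 1 means the detection-based and CVA-based change regions occupy exactly the same grids; a ratio near 0 flags the region for the secondary detection described later.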
Preferably, the change region in which the coincidence ratio is smaller than the threshold value includes:
the set of differences between the set of widely-differing target object positions and the subset of target object positions, the set of differences between the set of widely-differing positions and the subset of change regions, and the set of differences between the set of change regions and the second merged set.
Preferably, the comparing each target object position in the target object position set with the corresponding region in the change region set to find a change region comprehensive set with a overlap ratio greater than the threshold value includes:
calculating and recording the number of grids occupied by each target object in the set of target object positions with large differences and the number of grids occupied by the corresponding region in the change region set, comparing the grids occupied by each target object with the grids occupied by the corresponding position in the change region set, and calculating the first grid number, i.e. the number of grids in which each change region overlaps between the two;
calculating the coincidence ratio of each change area according to the first grid number;
and screening out a change region with the overlapping ratio larger than a threshold value from the target object position set and the change region set with large difference to obtain the target object position subset and the change region subset.
Preferably, the comparing the set of positions with large boundary-line differences with the change region set to find a change position comprehensive set with a coincidence ratio greater than the threshold value comprises:
dividing the remote sensing image into a plurality of areas with the same size, calculating and recording the grid number occupied by the position set with large boundary difference in each area and the grid number occupied by the change area set in each area, comparing the same areas, and calculating the second grid number overlapped in the same areas;
calculating the coincidence ratio of each region according to the second grid number;
and screening out the region with the overlapping ratio larger than a threshold value from the position set with large boundary difference and the change region set, and reserving overlapped grid positions to obtain the difference position subset and the change position subset.
As an optional implementation, flow 5 in this embodiment implements the comparative analysis of two remote sensing images of the same area in different periods, and specifically includes:
scheme 5.1: and (5) comparing and analyzing the target objects.
According to the information sets O_1 and O_2 of all target objects obtained in flow 1, the corresponding target objects in the two remote sensing images are compared; the following three cases can occur during the comparison:
(1) The target object is detected at the same position and the types are consistent.
(2) The target object is detected at the same position, but the types are inconsistent.
(3) Only one target object is detected at the same location.
The first case indicates no change or no obvious change at the same position, the second case indicates a relatively obvious change, and the third case indicates a significant change. The second and third cases are taken as change regions, yielding the set of target object positions with large differences O_d = {(top_k, left_k, right_k, bottom_k) | k ≤ P}, where P is the total number of target objects with large differences.
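The three-way comparison can be sketched as follows. This is a minimal illustration: the (x, y, type) detection format and the exact-position matching rule are assumptions, since in practice matching would tolerate small positional offsets between the two images:

```python
def compare_detections(dets1, dets2):
    """Three-way comparison of two detection lists of (x, y, type) triples."""
    pos1 = {(x, y): t for x, y, t in dets1}
    pos2 = {(x, y): t for x, y, t in dets2}
    same_type, diff_type, only_one = [], [], []
    for p in sorted(set(pos1) | set(pos2)):
        if p in pos1 and p in pos2:
            # case (1): same position, same type; case (2): types differ
            (same_type if pos1[p] == pos2[p] else diff_type).append(p)
        else:
            # case (3): detected in only one of the two images
            only_one.append(p)
    # cases (2) and (3) become the change regions with large differences (O_d)
    return same_type, diff_type, only_one

dets_t1 = [(0, 0, "building"), (1, 1, "road")]
dets_t2 = [(0, 0, "building"), (1, 1, "water"), (2, 2, "ship")]
same, diff, single = compare_detections(dets_t1, dets_t2)
print(same, diff, single)  # [(0, 0)] [(1, 1)] [(2, 2)]
```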
Scheme 5.2: and (5) comparing and analyzing the sea-land boundary lines.
Difference processing is performed on the two remote sensing images with boundary lines obtained in flow 4 to obtain a difference image, and the positions where differences occur are recorded, yielding the set of positions with large boundary-line differences L_d = {(x_s, y_s) | s ≤ W}, where W is the number of positions with large boundary-line differences. These positions are then located in each of the two remote sensing images with boundary lines and, combining the boundary line with the order in which the remote sensing images were generated, it is judged whether the boundary has shifted toward the sea area or toward the land, thereby analyzing whether the land area or the sea area has shrunk at the positions corresponding to the differences.
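The boundary-difference step amounts to an element-wise comparison of the two binary boundary masks. A minimal pure-Python sketch, in which the 5 × 5 masks and the one-row boundary shift are illustrative assumptions:

```python
# Two 5x5 binary boundary masks (True on the sea-land boundary line).
ROWS, COLS = 5, 5
mask_t1 = [[r == 2 for c in range(COLS)] for r in range(ROWS)]  # row 2 at t1
mask_t2 = [[r == 3 for c in range(COLS)] for r in range(ROWS)]  # row 3 at t2

# Difference image: positions where the two boundary lines disagree (set L_d).
L_d = [(r, c) for r in range(ROWS) for c in range(COLS)
       if mask_t1[r][c] != mask_t2[r][c]]
print(len(L_d))  # 10 differing positions (all of rows 2 and 3)
```

Whether such a shift means land gain or loss is then decided by checking, for each differing position, on which side of the earlier boundary line it falls.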
Scheme 5.3: and detecting the change of the multi-view fused remote sensing image.
(1) The remote sensing image is divided into n × m grids and a threshold H is set; the coincidence ratio is calculated as follows:
wherein Ω_i is the coincidence ratio of the i-th change region, M_i is the number of overlapping grids of the i-th change region, a_i is the number of grids occupied by the i-th change region in the set O_d of target object positions with large differences or the set L_d of positions with large boundary-line differences, b_i is the number of grids occupied by the corresponding region in the change region set Z_d, len() is the length of the set, and Q is the total number of change regions.
(2) Each target object position in the set O_d of target object positions with large differences is compared with the corresponding region in the change region set Z_d to find the position regions with larger overlap. The specific steps are as follows:
calculate and record the number of grids occupied by each target object in O_d and the number of grids occupied by the corresponding region in Z_d; compare the grids occupied by each target object with the grids occupied by the corresponding position in Z_d and count the overlapping grids of each change region, from which the coincidence ratio of each change region is calculated; from O_d and Z_d, screen out the change regions whose coincidence ratio exceeds the threshold H, obtaining the change region sets O_d1 and Z_d1.
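The screening step can be sketched as follows. This is a minimal illustration under assumptions: a Dice-style coincidence ratio is used (the patent's formula is given only as an image), and the region names, grid data, and threshold value are invented for the example:

```python
H = 0.5  # coincidence-ratio threshold (illustrative value)

def coincidence_ratio(cells_a, cells_b):
    # Dice-style overlap over grid cells, 2*M / (a + b) — an assumed form.
    m = len(cells_a & cells_b)
    total = len(cells_a) + len(cells_b)
    return 2 * m / total if total else 0.0

# Grids occupied by each widely-differing target object (from O_d) and by
# the corresponding region in the change region set (from Z_d).
O_d = {"obj1": {(0, 0), (0, 1), (0, 2)}, "obj2": {(4, 4)}}
Z_d = {"obj1": {(0, 1), (0, 2)}, "obj2": {(9, 9)}}

# Keep only the regions whose coincidence ratio exceeds H (O_d1 / Z_d1).
kept = [name for name in O_d if coincidence_ratio(O_d[name], Z_d[name]) > H]
print(kept)  # ['obj1']
```

Here obj1 scores 2·2/(3+2) = 0.8 and is kept, while obj2 has no overlap and falls through to the secondary detection of step (5).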
(3) The set L_d of positions with large boundary-line differences is compared with the change region set Z_d to find the position regions with larger overlap. The specific steps are as follows:
divide the remote sensing map into several regions of the same size; calculate and record the number of grids occupied by L_d in each region and the number of grids occupied by Z_d in each region; compare the same regions and count the overlapping grids within each, from which the coincidence ratio of each region is calculated; from L_d and Z_d, screen out the regions whose coincidence ratio exceeds the threshold H and keep the overlapping grid positions, obtaining the change position sets L_d2 and Z_d2.
(4) All position regions with larger overlap are merged. The change position sets L_d2 and Z_d2 are converted from point sets to sets in box form, L′_d2 and Z′_d2, and the following operations are performed on the sets O_d1, Z_d1, L′_d2 and Z′_d2:
O = O_d1 ∪ L′_d2, Z = Z_d1 ∪ Z′_d2.
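Converting a point set to box form and taking the unions can be sketched as follows. The points_to_box helper and the sample data are illustrative assumptions; the (top, left, right, bottom) layout mirrors the O_d tuples above:

```python
def points_to_box(points):
    # Bounding box of a set of (x, y) points, as (top, left, right, bottom).
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    return (min(ys), min(xs), max(xs), max(ys))

L_d2 = {(1, 2), (3, 2), (2, 5)}   # point set kept from step (3)
box = points_to_box(L_d2)          # box form, i.e. an element of L'_d2
print(box)  # (2, 1, 3, 5)

O_d1 = {(0, 0, 4, 4)}              # boxes kept from step (2)
O = O_d1 | {box}                   # O = O_d1 ∪ L'_d2
print(len(O))  # 2
```

After conversion, both detection-based and boundary-based change evidence live in the same box representation, so the unions O and Z can be formed directly.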
(5) When the coincidence ratio is smaller than H, missed detections may occur. Therefore, secondary detection is performed on the change regions whose coincidence ratio is smaller than H, reducing missed detections and improving the accuracy of the change detection. The specific steps are as follows:
the change regions with coincidence ratio smaller than H are O_d − O_d1, L_d − L_d2 and Z_d − Z. The change regions whose coincidence ratio Ω is smaller than H are marked in the two remote sensing images in the form of identification frames; the regions where the identification frames are located are cut out using a matting technique, separating the identification frames from the background and thereby obtaining new remote sensing images that contain only the identification frame regions; the two new remote sensing images are differenced to obtain the change region set R.
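The masking-and-differencing step can be sketched as follows. The mask is a minimal stand-in for the matting step, and the 4 × 4 toy images, the single identification frame, and the pixel values are assumptions:

```python
def mask_to_frames(img, frames):
    """Keep only the pixels inside the identification frames; zero elsewhere."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for top, left, right, bottom in frames:
        for r in range(top, bottom + 1):
            for c in range(left, right + 1):
                out[r][c] = img[r][c]
    return out

img_t1 = [[1] * 4 for _ in range(4)]
img_t2 = [[1] * 4 for _ in range(4)]
img_t2[1][1] = 0                    # one changed pixel inside the frame
frames = [(1, 1, 2, 2)]             # identification frame (top, left, right, bottom)

m1 = mask_to_frames(img_t1, frames)
m2 = mask_to_frames(img_t2, frames)
# Difference of the two masked images: the missed change region set R.
R = [(r, c) for r in range(4) for c in range(4) if m1[r][c] != m2[r][c]]
print(R)  # [(1, 1)]
```

Because everything outside the frames is zeroed in both images, the difference can only come from inside the low-coincidence regions, which is exactly where missed changes are being re-checked.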
Finally, the sets O and Z are the fused change regions and the set R is the set of previously missed change regions; merging the three sets, which together cover all change regions, yields the final change detection result.
The beneficial effects of the invention are as follows:
the invention realizes the change detection of the remote sensing image from the whole and local angles, reduces the missing detection information of the change detection result, and effectively improves the accuracy of the change detection.
In the process of target detection, the invention not only identifies the targets but also feeds the accurately identified targets back as input to the target detection model, continuously optimizing the model, realizing adaptive optimization of the model and improving the recognition accuracy of the target detection model.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and reference may be made between the embodiments for their identical and similar parts.
The principles and embodiments of the present invention have been described herein with reference to specific examples; this description is intended only to assist in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of the present teachings likewise fall within the scope of the present invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (7)

1. A remote sensing image contrast analysis method based on multi-view fusion is characterized by comprising the following steps:
respectively carrying out target detection on all target objects in two remote sensing images in different time periods in the same region by using the optimized target detection model to obtain information sets of all target objects respectively corresponding to the two remote sensing images;
performing primary change detection on the remote sensing images by using a multi-feature hierarchical CVA change detection method to obtain a change region set;
based on an improved U-Net network model, sea-land segmentation is carried out on the two remote sensing images, and sea-land segmentation results are obtained;
performing secondary change detection according to the information set, the change area set and the sea-land segmentation result to obtain a final change detection result;
the sea-land segmentation is carried out on two remote sensing images based on the improved U-Net network model to obtain sea-land segmentation results, and the method comprises the following steps:
acquiring an open source remote sensing image, and performing image preprocessing on the open source remote sensing image to obtain a preprocessed image;
interpreting the preprocessed image by using arcmap software to obtain a binary segmentation sample diagram; green pixels in the binary segmentation sample graph represent land and blue pixels represent sea;
determining a second training set, a second testing set and a verification set according to the binary segmentation sample diagram;
training, testing and verifying an initial segmentation model according to the second training set, the second testing set and the verification set to obtain the trained improved U-Net network model;
inputting the two remote sensing images into the improved U-Net network model, and outputting to obtain a sea-land segmentation result diagram; the sea-land segmentation result graph comprises two remote sensing images with dividing lines and a dividing line set;
performing secondary change detection according to the information set, the change area set and the sea-land segmentation result to obtain a final change detection result, wherein the secondary change detection comprises the following steps:
according to the two information sets, performing contrast analysis on the corresponding target objects in the two remote sensing images to obtain an analysis result; the analysis result comprises three cases: a target object is detected at the same position and the types are consistent; a target object is detected at the same position but the types are inconsistent; and only one target object is detected at the same position;
determining, as change regions, the positions where a target object is detected at the same position but with inconsistent types and the positions where only one target object is detected, to obtain a set of target object positions with large differences;
performing difference processing on the two remote sensing images with the dividing lines to obtain a difference image;
obtaining a position set with large boundary difference according to the positions with the difference in the difference images;
respectively finding, in the two remote sensing images with boundary lines, the offset positions from the set of positions with large boundary-line differences;
judging, in combination with the boundary line and the temporal order in which the remote sensing images were generated, whether each offset position shifts toward the sea area or toward the land, to obtain a judgment result;
analyzing whether the position corresponding to the difference is reduced in the land area or the sea area according to the judging result;
performing grid division on the remote sensing image, and setting a threshold value;
comparing each target object position in the target object position set with the corresponding region in the change region set, and finding a change region comprehensive set with the overlapping ratio larger than the threshold value; the comprehensive change area set comprises a target object position subset and a change area subset;
comparing the set of positions with large boundary-line differences with the change region set to find a change position comprehensive set with a coincidence ratio greater than the threshold value; the change position comprehensive set comprises a difference position subset and a change position subset;
converting the difference position subset and the change position subset from the point set to a set in a frame form respectively to obtain a first conversion set and a second conversion set;
combining the target object position subset with the first conversion set to obtain a first combined set;
combining the change region subset with the second conversion set to obtain a second combined set;
labeling the change areas with the coincidence ratio smaller than the threshold value in the two remote sensing images in the form of identification frames;
carrying out matting processing on the part of the area where the identification frame is positioned by using a matting technology so as to realize separation of the identification frame and the background, thereby obtaining a new remote sensing image only comprising the identification frame area;
performing difference on the two new remote sensing images to obtain an identification area set;
and merging the first merging set, the second merging set and the identification area set to obtain the final change detection result.
2. The remote sensing image contrast analysis method based on multi-view fusion according to claim 1, wherein the optimization method of the target detection model comprises the following steps:
collecting an open-source remote sensing image data set, and marking a remote sensing image graph in the remote sensing image data set by using a marking tool to form a data set with an identification frame;
randomly dividing the data set with the identification frame into a first training set and a first test set;
based on a random gradient descent SGD method, inputting the first training set into an original detection model for training, and testing the trained initial model according to the first testing set to obtain an initial model;
performing image preprocessing operation on two remote sensing images in the same region and at different time intervals, and inputting the remote sensing images subjected to the image preprocessing into the initial model for target detection to obtain an initial information set;
removing target object information except the target object information which is at the same position and has the same type in the initial information set in the two remote sensing images after target detection to obtain a new remote sensing image;
and inputting the new remote sensing image into the initial model to perform a multi-round training optimization process to obtain the optimized target detection model.
3. The method for comparing and analyzing remote sensing images based on multi-view fusion according to claim 1, wherein the performing primary change detection on the remote sensing image map by using a multi-feature hierarchical CVA change detection method to obtain a change region set comprises:
performing multi-scale image segmentation on the remote sensing image based on an image segmentation method of the region adjacency graph so as to extract features under each scale;
random forest feature selection is carried out on the features extracted under each scale, and hierarchical CVA change detection is carried out by utilizing the optimized feature vectors, so that a binary change detection result is obtained;
and determining the change region set according to the binary change detection result.
4. The remote sensing image contrast analysis method based on multi-view fusion according to claim 1, wherein the calculation formula of the coincidence ratio is as follows:
wherein Ω_i is said coincidence ratio of the i-th change region, M_i is the number of overlapping grids of the i-th change region, O_d is the set of target object positions with large differences, L_d is the set of positions with large boundary-line differences, Z_d is the change region set, a_i is the number of grids occupied by the i-th change region in O_d or L_d, b_i is the number of grids occupied by the corresponding region in Z_d, len() is the length of the set, and Q is the total number of change regions.
5. The multi-view fusion-based remote sensing image contrast analysis method according to claim 1, wherein the change region in which the coincidence ratio is smaller than the threshold value comprises:
the set of differences between the set of widely-differing target object positions and the subset of target object positions, the set of differences between the set of widely-differing positions and the subset of change regions, and the set of differences between the set of change regions and the second merged set.
6. The remote sensing image contrast analysis method based on multi-view fusion according to claim 1, wherein the comparing each target object position in the target object position set with the corresponding region in the change region set to find a change region comprehensive set with a overlap ratio greater than the threshold value comprises:
calculating and recording the number of grids occupied by each target object in the set of target object positions with large differences and the number of grids occupied by the corresponding region in the change region set, comparing the grids occupied by each target object with the grids occupied by the corresponding position in the change region set, and calculating the first grid number, i.e. the number of grids in which each change region overlaps between the two;
calculating the coincidence ratio of each change area according to the first grid number;
and screening out a change region with the overlapping ratio larger than a threshold value from the target object position set and the change region set with large difference to obtain the target object position subset and the change region subset.
7. The method for comparing and analyzing remote sensing images based on multi-view fusion according to claim 1, wherein the comparing the set of positions with large boundary-line differences with the change region set to find a change position comprehensive set with a coincidence ratio greater than the threshold value comprises:
dividing the remote sensing image into a plurality of areas with the same size, calculating and recording the grid number occupied by the position set with large boundary difference in each area and the grid number occupied by the change area set in each area, comparing the same areas, and calculating the second grid number overlapped in the same areas;
calculating the coincidence ratio of each region according to the second grid number;
and screening out the region with the overlapping ratio larger than a threshold value from the position set with large boundary difference and the change region set, and reserving overlapped grid positions to obtain the difference position subset and the change position subset.
CN202211518990.9A 2022-11-30 2022-11-30 Remote sensing image comparison analysis method based on multi-view fusion Active CN115830448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211518990.9A CN115830448B (en) 2022-11-30 2022-11-30 Remote sensing image comparison analysis method based on multi-view fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211518990.9A CN115830448B (en) 2022-11-30 2022-11-30 Remote sensing image comparison analysis method based on multi-view fusion

Publications (2)

Publication Number Publication Date
CN115830448A CN115830448A (en) 2023-03-21
CN115830448B true CN115830448B (en) 2024-02-09

Family

ID=85533047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211518990.9A Active CN115830448B (en) 2022-11-30 2022-11-30 Remote sensing image comparison analysis method based on multi-view fusion

Country Status (1)

Country Link
CN (1) CN115830448B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005346664A (en) * 2004-06-07 2005-12-15 Nogiwa Sangyo Kk Coastline extraction method and coastline extraction system
KR101488214B1 (en) * 2014-07-11 2015-01-30 (주)지오시스템리서치 Apparatus for monitoring geographical features change of intertidal zone using image pictured by camera and the method thereof
CN108376247A (en) * 2018-02-05 2018-08-07 北方工业大学 Strategic coarse-fine combined sea-land separation method applied to optical remote sensing ship detection
CN110264672A (en) * 2019-06-25 2019-09-20 广州市城市规划勘测设计研究院 A kind of early warning system of geological disaster
CN110853026A (en) * 2019-11-16 2020-02-28 四创科技有限公司 Remote sensing image change detection method integrating deep learning and region segmentation
CN110866926A (en) * 2019-10-24 2020-03-06 北京航空航天大学 Infrared remote sensing image rapid and fine sea-land segmentation method
CN112101159A (en) * 2020-09-04 2020-12-18 国家林业和草原局中南调查规划设计院 Multi-temporal forest remote sensing image change monitoring method
CN113537023A (en) * 2021-07-08 2021-10-22 中国人民解放军战略支援部队信息工程大学 Method for detecting semantic change of remote sensing image
CN113628227A (en) * 2021-08-02 2021-11-09 哈尔滨工业大学 Coastline change analysis method based on deep learning
CN113988271A (en) * 2021-11-08 2022-01-28 北京市测绘设计研究院 Method, device and equipment for detecting high-resolution remote sensing image change
CN114255375A (en) * 2020-09-10 2022-03-29 北京搜狗科技发展有限公司 Image data processing method and device and electronic equipment
CN115115819A (en) * 2022-06-14 2022-09-27 青岛理工大学 Image multi-view semantic change detection network and method for assembly sequence monitoring
CN115331087A (en) * 2022-10-11 2022-11-11 水利部交通运输部国家能源局南京水利科学研究院 Remote sensing image change detection method and system fusing regional semantics and pixel characteristics


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Marius Philipp et al., "Automated Extraction of Annual Erosion Rates for Arctic Permafrost Coasts Using Sentinel-1, Deep Learning, and Change Vector Analysis", Remote Sensing, pp. 1-26 *
Kun Tan et al., "Object-Based Change Detection Using Multiple Classifiers and Multi-Scale Uncertainty Analysis", Remote Sensing, pp. 1-17 *


Similar Documents

Publication Publication Date Title
CN110287960B (en) Method for detecting and identifying curve characters in natural scene image
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
Tao et al. Deep learning for unsupervised anomaly localization in industrial images: A survey
CN113674247B (en) X-ray weld defect detection method based on convolutional neural network
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN107330453B (en) Pornographic image identification method based on step-by-step identification and fusion key part detection
CN103186894B (en) A kind of multi-focus image fusing method of self-adaptation piecemeal
CN116843999B (en) Gas cylinder detection method in fire operation based on deep learning
CN113505670B (en) Remote sensing image weak supervision building extraction method based on multi-scale CAM and super-pixels
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
Branikas et al. A novel data augmentation method for improved visual crack detection using generative adversarial networks
CN110738132A (en) target detection quality blind evaluation method with discriminant perception capability
CN112308040A (en) River sewage outlet detection method and system based on high-definition images
CN113657414B (en) Object identification method
Zhang et al. Residual attentive feature learning network for salient object detection
CN103455798B (en) Histogrammic human body detecting method is flowed to based on maximum geometry
CN111079807B (en) Ground object classification method and device
CN115830448B (en) Remote sensing image comparison analysis method based on multi-view fusion
CN111612802A (en) Re-optimization training method based on existing image semantic segmentation model and application
CN114283431B (en) Text detection method based on differentiable binarization
CN110889418A (en) Gas contour identification method
CN111882545B (en) Fabric defect detection method based on bidirectional information transmission and feature fusion
García et al. A configuration approach for convolutional neural networks used for defect detection on surfaces
CN114596433A (en) Insulator identification method
Chen et al. RDUnet-A: A Deep Neural Network Method with Attention for Fabric Defect Segmentation Based on Autoencoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant