CN111179230A - Remote sensing image contrast change detection method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN111179230A CN111179230A CN201911305648.9A CN201911305648A CN111179230A CN 111179230 A CN111179230 A CN 111179230A CN 201911305648 A CN201911305648 A CN 201911305648A CN 111179230 A CN111179230 A CN 111179230A
- Authority
- CN
- China
- Prior art keywords
- pixel
- remote sensing
- matrix
- target
- pixel point
- Prior art date
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
Abstract
The invention provides a remote sensing image contrast change detection method and apparatus, a storage medium, and an electronic device. The method comprises the following steps: extracting a difference map of two remote sensing images, to be compared, of the same region acquired in different time phases; calculating, for each pixel in the difference map, correlation information with the other pixels in its designated neighborhood; and locating a target change region in the difference map according to that correlation information, then optimizing the target change region. Because the target change region can be located in the difference map from the correlation information of each pixel with the other pixels in its designated neighborhood, changed regions can be detected efficiently from the two remote sensing images, and the generalization ability of the change detection method is improved.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a remote sensing image contrast change detection method and apparatus, a storage medium, and an electronic device.
Background
Contrast change in remote sensing images refers to the differences visible between remote sensing pictures of the same area taken at different times, including newly built structures, changes in vegetation coverage, land development and utilization, and the like. Change detection processes and compares such pictures to find the positions that have changed over time and to identify the pixels corresponding to the changed areas.
Change Detection (CD) plays a very important role in accurately understanding surface changes from existing remote sensing image data. Because remote sensing images are easy to acquire and are updated over short periods, accurately and promptly detecting changed areas from them is significant for city management and planning, urbanization evaluation, post-disaster reconstruction, and similar work. Combining detected newly built structures with the zoning of the areas in which they stand allows illegal construction to be detected effectively, which aids standardized urban management. Analyzing the density of change in remote sensing images clarifies the development speed of each area and where building changes concentrate, which helps in estimating and evaluating an area's development. By locating areas of drastic change, one can judge whether an area struck by a natural disaster such as an earthquake is being rebuilt as planned, and so supervise the reconstruction process.
To obtain the locations of image change, industries such as surveying and mapping currently compare two remote sensing images manually with tools such as ArcGIS and mark the changed areas. Because buildings are densely distributed, the changes in each area must be compared carefully; moreover, remote sensing images have low contrast, dense urban buildings, and a large area to be inspected, so manually marking their changes consumes a great deal of time and effort. How to mark changed areas efficiently from two remote sensing images, thereby reducing labor and improving generalization ability, is therefore a problem to be solved.
Disclosure of Invention
The invention provides a remote sensing image contrast change detection method and apparatus, a storage medium, and an electronic device, addressing the difficulty in the prior art of efficiently detecting changed areas from two remote sensing images and the poor generalization ability of existing change detection methods.
In one aspect of the invention, a remote sensing image contrast change detection method is provided, comprising:
extracting a difference map of two remote sensing images, to be compared, of the same region acquired in different time phases;
calculating, for each pixel in the difference map, correlation information with the other pixels in its designated neighborhood; and
locating a target change region in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood, and optimizing the target change region.
Optionally, extracting a difference map of two remote sensing images of the same region acquired in different time phases comprises:
extracting the feature values of the R, G, and B channels of each of the two remote sensing images;
generating a grayscale picture for each remote sensing image from the mean of the R, G, and B feature values of each of its pixels; and
calculating the difference between pixels at the same position in the two grayscale pictures, and generating the difference map of the two remote sensing images from the per-pixel differences.
Optionally, extracting a difference map of two remote sensing images of the same region acquired in different time phases comprises:
extracting the feature values of the R, G, and B channels of each of the two remote sensing images;
computing a histogram of the feature values of each of the R, G, and B channels of each remote sensing image;
calculating the pairwise histogram distances of the three histograms of each remote sensing image;
determining, from the relative sizes of those histogram distances, the weights the R, G, and B feature values carry when the corresponding grayscale picture is generated, and generating the grayscale picture of the current remote sensing image with the determined weights; and
calculating the difference between pixels at the same position in the two grayscale pictures, and generating the difference map of the two remote sensing images from the per-pixel differences.
Optionally, extracting a difference map of two remote sensing images of the same region acquired in different time phases comprises:
acquiring the target color feature of the object to be detected;
extracting the feature values of the R, G, and B channels of each of the two remote sensing images;
calculating, for each pixel of each remote sensing image, the difference between the feature value of the color channel corresponding to the target color feature and those of the other channels;
adjusting the weights that the target-color channel's feature value and those channel differences carry when the grayscale picture is generated, and generating the grayscale picture of the current remote sensing image with the adjusted weights; and
calculating the difference between pixels at the same position in the two grayscale pictures, and generating the difference map of the two remote sensing images from the per-pixel differences.
Optionally, calculating, for each pixel in the difference map, correlation information with the other pixels in its neighborhood comprises:
delimiting a neighborhood for each pixel in the difference map;
generating, from the pixel information of every pixel in each neighborhood, the pixel vector of the pixel to which that neighborhood belongs;
generating a target matrix from the pixel vectors of all pixels in the difference map; and
sparsifying the target matrix, the sparsified matrix representing the correlation information of each pixel in the difference map with the other pixels in its neighborhood.
Optionally, sparsifying the target matrix comprises:
selecting target pixels for change detection from the difference map according to a preset rule;
generating a source matrix from the pixel vectors of the target pixels;
calculating a transformation matrix from the covariance matrix of the source matrix; and
transforming the target matrix with the transformation matrix.
Optionally, after transforming the target matrix with the transformation matrix, the method further comprises sparsifying the transformed matrix, which comprises:
finding the pixel information in the transformed matrix that is smaller than a preset threshold; and
setting the found pixel information to 0.
Optionally, selecting target pixels for change detection from the difference map according to a preset rule comprises:
within each pixel row of the difference map, selecting one target pixel every fixed number of pixel columns, and within each pixel column, selecting one target pixel every fixed number of pixel rows; or
dividing the difference map into several pixel units according to a preset division standard, and selecting the pixel at the same relative position in each unit as a target pixel.
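The first selection rule (one target pixel every fixed number of pixel rows and columns) amounts to sampling the difference map on a regular grid. A minimal sketch under that reading, with the step size as a free parameter:

```python
import numpy as np

def sample_target_pixels(height, width, step):
    """Return (row, col) grid positions, one every `step` rows and columns."""
    rows = np.arange(0, height, step)
    cols = np.arange(0, width, step)
    return [(int(r), int(c)) for r in rows for c in cols]
```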
Optionally, calculating a transformation matrix from the covariance matrix of the source matrix comprises:
zero-centering each row of the source matrix;
calculating the covariance matrix of the zero-centered matrix, together with its eigenvalues and eigenvectors; and
stacking the eigenvectors top to bottom in order of decreasing eigenvalue to obtain the transformation matrix.
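These three steps (zero-centering each row, eigendecomposing the covariance matrix, stacking eigenvectors by decreasing eigenvalue) are the standard construction of a PCA transformation matrix. A sketch under that reading, which assumes rows of the source matrix are pixel-vector dimensions and columns are sampled target pixels; the text leaves that layout open:

```python
import numpy as np

def transformation_matrix(src):
    """Build the transformation matrix from the covariance of `src`."""
    centered = src - src.mean(axis=1, keepdims=True)   # zero-mean each row
    cov = centered @ centered.T / centered.shape[1]    # covariance matrix
    vals, vecs = np.linalg.eigh(cov)                   # ascending eigenvalues
    order = np.argsort(vals)[::-1]                     # largest eigenvalue first
    return vecs[:, order].T                            # rows = eigenvectors

def transform(target_matrix, p):
    """Apply the transformation matrix to the target matrix."""
    return p @ target_matrix
```

Because `np.linalg.eigh` returns eigenvalues in ascending order, the explicit reordering implements the "arranged by magnitude of eigenvalue" step.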
Optionally, locating a target change region in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood, and optimizing the target change region, comprises:
classifying the pixel vectors of the pixels in the difference map according to that correlation information;
finding the target classification category, i.e., the category containing the fewest pixel vectors;
taking the pixels corresponding to the pixel vectors in the target category as the target change region; and
performing image optimization on the target change region.
Optionally, classifying the pixel vectors of the pixels in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood comprises:
configuring the number of classification categories;
calculating the distance between the pixel vector of the current pixel and the pixel vectors of the other pixels in the difference map;
if the minimum calculated distance exceeds the maximum distance between the points of any two categories, placing the pixel vector of the current pixel into a new category and merging those two categories; and
otherwise, assigning the pixel vector of the current pixel to the category of the pixel vector at minimum distance.
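The rule above can be read as a greedy, distance-driven clustering. The sketch below simplifies the merge step: it only opens a new category when a vector lies beyond a distance threshold from every existing category, so it approximates rather than reproduces the claimed merge-and-split behaviour, and the threshold is an assumed parameter.

```python
import numpy as np

def classify_vectors(vectors, threshold):
    """Greedily group pixel vectors by nearest-category distance."""
    categories = []  # each category is a list of vectors
    for v in vectors:
        if not categories:
            categories.append([v])
            continue
        # Distance from v to a category = distance to its closest member.
        dists = [min(np.linalg.norm(v - u) for u in cat) for cat in categories]
        nearest = int(np.argmin(dists))
        if dists[nearest] > threshold:
            categories.append([v])        # far from everything: new category
        else:
            categories[nearest].append(v)
    return categories
```

On this reading, the target change region would then be the smallest category returned.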
Optionally, performing image optimization on the target change region comprises:
scanning the pixels of the target change region in sequence with a scanning window of preset size;
convolving the pixels in the window with a first preset matrix, and removing the pixel currently at the window's center if the convolution result does not exceed a preset first threshold; and
convolving the pixels in the window with a second preset matrix, and if the convolution result is below a preset second threshold, resetting the pixel information of every pixel in the window to match the pixel currently at the window's center.
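Read with 3 x 3 all-ones kernels, the two convolutions amount to neighbor counting: the first pass removes isolated change pixels, the second resets sparsely populated windows to their center value. The kernels and the thresholds `t1` and `t2` below are assumptions; the claim fixes only the scan, convolve, and threshold structure.

```python
import numpy as np

def optimize_mask(mask, t1=1, t2=3):
    """Clean up a binary change mask with two window-sum tests.

    mask is a 2-D 0/1 array; t1, t2 and the implicit all-ones kernels are
    illustrative choices, not values fixed by the source text.
    """
    h, w = mask.shape
    out = mask.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = mask[r - 1:r + 2, c - 1:c + 2]
            neighbors = window.sum() - mask[r, c]
            if mask[r, c] and neighbors <= t1:
                out[r, c] = 0                               # drop isolated pixel
            elif window.sum() < t2:
                out[r - 1:r + 2, c - 1:c + 2] = mask[r, c]  # reset window to center
    return out
```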
In another aspect of the present invention, a remote sensing image contrast change detection apparatus is provided, comprising:
an extraction module for extracting a difference map of two remote sensing images, to be compared, of the same region acquired in different time phases;
a calculation module for calculating, for each pixel in the difference map, correlation information with the other pixels in its designated neighborhood; and
a processing module for locating a target change region in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood, and optimizing the target change region.
Optionally, the extraction module is specifically configured to: extract the feature values of the R, G, and B channels of each of the two remote sensing images; generate a grayscale picture for each remote sensing image from the mean of the R, G, and B feature values of each of its pixels; and calculate the difference between pixels at the same position in the two grayscale pictures, generating the difference map of the two remote sensing images from the per-pixel differences.
Optionally, the extraction module is specifically configured to: extract the feature values of the R, G, and B channels of each of the two remote sensing images; compute a histogram of the feature values of each channel of each remote sensing image; calculate the pairwise histogram distances of the three histograms of each remote sensing image; determine, from the relative sizes of those distances, the weights the R, G, and B feature values carry when the corresponding grayscale picture is generated, generating the grayscale picture of the current remote sensing image with the determined weights; and calculate the difference between pixels at the same position in the two grayscale pictures, generating the difference map from the per-pixel differences.
Optionally, the extraction module is specifically configured to: acquire the target color feature of the object to be detected; extract the feature values of the R, G, and B channels of each of the two remote sensing images; calculate, for each pixel of each remote sensing image, the difference between the feature value of the color channel corresponding to the target color feature and those of the other channels; adjust the weights that the target-color channel's feature value and those differences carry when the grayscale picture is generated, generating the grayscale picture of the current remote sensing image with the adjusted weights; and calculate the difference between pixels at the same position in the two grayscale pictures, generating the difference map from the per-pixel differences.
Optionally, the calculation module comprises:
a configuration unit for delimiting a neighborhood for each pixel in the difference map;
a first generation unit for generating, from the pixel information of every pixel in each neighborhood, the pixel vector of the pixel to which that neighborhood belongs;
a second generation unit for generating a target matrix from the pixel vectors of all pixels in the difference map; and
a calculation unit for sparsifying the target matrix, the sparsified matrix representing the correlation information of each pixel in the difference map with the other pixels in its neighborhood.
Optionally, the calculation unit is specifically configured to: select target pixels for change detection from the difference map according to a preset rule; generate a source matrix from the pixel vectors of the target pixels; calculate a transformation matrix from the covariance matrix of the source matrix; and transform the target matrix with the transformation matrix.
Optionally, the calculation unit is further configured, after the target matrix is transformed with the transformation matrix, to find the pixel information in the transformed matrix that is smaller than a preset threshold and set it to 0.
Optionally, the calculation unit is specifically configured to: within each pixel row of the difference map, select one target pixel every fixed number of pixel columns, and within each pixel column, select one target pixel every fixed number of pixel rows; or divide the difference map into several pixel units according to a preset division standard and select the pixel at the same relative position in each unit as a target pixel.
Optionally, the calculation unit is specifically configured to: zero-center each row of the source matrix; calculate the covariance matrix of the zero-centered matrix, together with its eigenvalues and eigenvectors; and stack the eigenvectors top to bottom in order of decreasing eigenvalue to obtain the transformation matrix.
Optionally, the processing module comprises:
a classification unit for classifying the pixel vectors of the pixels in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood;
a search unit for finding the target classification category, i.e., the category containing the fewest pixel vectors; and
an optimization unit for taking the pixels corresponding to the pixel vectors in the target category as the target change region and performing image optimization on it.
Optionally, the classification unit is specifically configured to: configure the number of classification categories; calculate the distance between the pixel vector of the current pixel and the pixel vectors of the other pixels in the difference map; if the minimum calculated distance exceeds the maximum distance between the points of any two categories, place the pixel vector of the current pixel into a new category and merge those two categories; and otherwise assign the pixel vector of the current pixel to the category of the pixel vector at minimum distance.
Optionally, the optimization unit is specifically configured to: scan the pixels of the target change region in sequence with a scanning window of preset size; convolve the pixels in the window with a first preset matrix, removing the pixel currently at the window's center if the convolution result does not exceed a preset first threshold; and convolve the pixels in the window with a second preset matrix, resetting the pixel information of every pixel in the window to match the pixel currently at the window's center if the convolution result is below a preset second threshold.
Furthermore, the invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program carries out the steps of the method described above.
Furthermore, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the method described above when executing the program.
With the remote sensing image contrast change detection method and apparatus, storage medium, and electronic device described above, the target change region is located in the difference map from the correlation information of each pixel with the other pixels in its designated neighborhood, so that changed regions can be detected efficiently from the two remote sensing images and the generalization ability of the change detection method is improved.
The foregoing is only an overview of the technical solution of the present invention; to make its technical means more clearly understood, and to make the above and other objects, features, and advantages more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 is a schematic flowchart of a remote sensing image contrast change detection method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the implementation of step S11 in the remote sensing image contrast change detection method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the implementation of step S12 in the remote sensing image contrast change detection method according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of the implementation of step S13 in the remote sensing image contrast change detection method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a remote sensing image contrast change detection apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the internal structure of the calculation module in the remote sensing image contrast change detection apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the internal structure of the processing module in the remote sensing image contrast change detection apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 schematically shows a flowchart of a remote sensing image contrast change detection method according to an embodiment of the present invention. Referring to Fig. 1, the method specifically comprises steps S11 to S13:
S11, extracting a difference map of two remote sensing images, to be compared, of the same region acquired in different time phases.
S12, calculating, for each pixel in the difference map, correlation information with the other pixels in its designated neighborhood.
S13, locating a target change region in the difference map according to the correlation information of each pixel with the other pixels in its designated neighborhood, and optimizing the target change region.
The method provided by this embodiment of the invention suits change detection of complex objects in densely built-up urban remote sensing imagery. It locates the target change region in the difference map automatically from the correlation information of each pixel with the other pixels in its designated neighborhood, which reduces labor, detects changed regions from the two remote sensing images accurately and efficiently, and improves the generalization ability of the change detection method.
In this embodiment of the present invention, two remote sensing images of the same location taken at different times are needed. This embodiment takes pictures of size 1200 × 3 as its example, and the shooting conditions of the two pictures need not be identical; that is, they may differ in color cast, illumination angle, and building tilt. The difference map is the single-channel image obtained from the two three-channel images. As shown in Fig. 2, there are several ways to obtain it, including direct difference calculation, adjusted picture channel weight calculation, and combined pixel channel difference calculation; one of the following can be chosen according to the type of difference to be detected:
the method comprises the following steps: the difference calculation is performed directly. The least computationally expensive approach is to average the three channel values of each picture. By adopting the method, the calculation amount can be minimized, and all the changed positions in the image can be indiscriminately detected, but the difference of the numerical values of the two images at certain positions without change due to various reasons such as illumination, cloud layers and the like in the actual image is considered, so that great noise interference is brought to the difference image.
The specific implementation is: extract the feature values of the R, G, and B channels of each of the two remote sensing images; generate a grayscale picture for each remote sensing image from the mean of the R, G, and B feature values of each of its pixels; and calculate the difference between pixels at the same position in the two grayscale pictures, generating the difference map of the two remote sensing images from the per-pixel differences.
In this embodiment, the feature values of the R, G, and B channels of the two remote sensing images are extracted to obtain the per-channel values of each color picture: each color picture is separated into its three channels (R, G, B).
For each pixel, the mean of the three channel values of the image is taken, and the mean is rounded to an integer, using the following formula:
Sk = αR + βG + γB, with α = 0.33, β = 0.33, γ = 0.33.
Compute the difference between the two gray level pictures by directly subtracting the pixel values at corresponding positions and taking the absolute value of the result to obtain the required difference map. Specifically, for the gray maps S1 and S2 of the two pictures, the absolute value of the difference is calculated as:
S=|S1-S2|
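A minimal sketch of method one with NumPy; the 2 × 2 test image and its values are illustrative, not from the patent:

```python
import numpy as np

def difference_map_direct(img1, img2):
    """Method one: average the three channels of each image into a
    gray map (alpha = beta = gamma = 1/3), then take the absolute
    pixel-wise difference S = |S1 - S2|."""
    s1 = img1.astype(np.float64).mean(axis=2)
    s2 = img2.astype(np.float64).mean(axis=2)
    return np.abs(s1 - s2)

# Tiny 2x2 example: img2 differs from img1 only at pixel (0, 0).
img1 = np.zeros((2, 2, 3), dtype=np.uint8)
img2 = img1.copy()
img2[0, 0] = (90, 60, 30)        # the changed pixel, mean value 60
diff = difference_map_direct(img1, img2)
```

The changed pixel gets value 60 in the difference map while unchanged pixels stay 0, illustrating why unrelated illumination differences would also show up here as noise.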
Method two: calculation with adjusted picture channel weights. The channel weights of an image are the proportions with which the values of the three channels contribute to the final gray map. To address the noise problem of method one, the adopted approach is to calculate the distances between the histograms of the color channels and use them to determine new channel weights, which can greatly reduce the interference caused by color cast and illumination.
The specific implementation method comprises the following steps: respectively extracting characteristic values of R, G, B three channels of the two remote sensing images; respectively carrying out histogram statistics on characteristic values of R, G, B three channels of each remote sensing image; calculating the histogram distance of the three histograms corresponding to each remote sensing image; determining R, G, B weight values occupied by characteristic values of three channels when corresponding gray level pictures are generated according to the size relation of histogram distances of three histograms corresponding to each remote sensing image, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight values; and respectively calculating the difference value of pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference image of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In this embodiment, the pixel values of each channel of each picture are counted. Since the pixel values are integers in the range 0 to 255, the number of pixels taking each value is counted, so each histogram has 256 bins in total, and 3 histograms are computed for the three channels.
Specifically, the distances between the three corresponding histograms can be calculated; the histogram distance between two histograms fa and fb is calculated as:
D(fa,fb)=Σj|fa[j]-fb[j]|, j=0,…,255
where f[j] is the value of the jth bin of a histogram.
In this embodiment, the channel with the minimum distance is G and is given a weight of 0.2, while the channel with the maximum distance is R and is given a weight of 0.5, the remaining weight going to the remaining channel; this reduces the interference caused by color cast and illumination.
The gray maps S1 and S2 of the two pictures are then obtained with these weights, and the absolute value of the difference between S1 and S2 is calculated as the difference map.
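A sketch of method two under stated assumptions: the patent gives only the weights 0.5 (most distant channel) and 0.2 (least distant); the middle weight 0.3, the L1 histogram distance, and scoring a channel by the sum of its distances to the other two channels are assumptions made here for illustration:

```python
import numpy as np

def histogram_weights(img, weights=(0.5, 0.3, 0.2)):
    """Rank the R, G, B channels by how far their 256-bin histograms
    lie from the other two channels' histograms, then give the largest
    weight to the most distant channel (assumed interpretation)."""
    hists = [np.bincount(img[..., c].ravel(), minlength=256)
             for c in range(3)]
    # Distance score of a channel = sum of L1 distances to the others.
    score = [sum(np.abs(hists[c] - hists[o]).sum()
                 for o in range(3) if o != c) for c in range(3)]
    order = np.argsort(score)[::-1]          # most distant first
    w = np.zeros(3)
    for rank, c in enumerate(order):
        w[c] = weights[rank]
    return w

def gray_map(img, w):
    """Weighted gray map S = wR*R + wG*G + wB*B."""
    return (img.astype(np.float64) * w).sum(axis=2)

# Example: in a strongly red image, R gets the largest weight.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 200                            # R channel dominates
w = histogram_weights(img)
gmap = gray_map(img, w)
```

The difference map is then `np.abs(gray_map(img1, w1) - gray_map(img2, w2))`, as in method one.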
Method three: calculation combining pixel channel differences. The proportions of the corresponding channel differences can be adjusted according to the kind of change that needs to be detected: when the object to be detected has a characteristic color, the weights are adjusted according to that color feature.
The specific implementation method comprises the following steps: acquiring target color characteristics of an object to be detected; respectively extracting characteristic values of R, G, B three channels of the two remote sensing images; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the characteristic value of a color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the weight value occupied by the difference value of the characteristic value between the color channel and other channels when generating a gray level picture, and generating the gray level picture corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference image of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In this embodiment, house changes are detected. In the images, houses newly appearing on open ground typically have red or blue roofs, while changes in green vegetation and the ground surface are not of interest.
Calculate the differences between the three channels of each picture, setting any difference smaller than 0 to 0; in total, 6 pairwise differences among the R, G, B channels are calculated.
Vegetation and water areas appear distinctly green, so pixels with large differences between the green channel (G) and the other two channels (B and R) need not receive much attention: the weights of the differences involving the G channel are reduced, and the weights of the differences between the R channel and the other channels are increased. Optionally, α is set to 0.4, β to 0.4, and γ to 0.2.
Generate the difference map. With the weights determined above, the new gray map of each picture, which pays most attention to the red component, is calculated according to the following formula:
S=α(R-B)+β(R-G)+γR
α+β+γ=1
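The formula above can be sketched as follows; the example pixel values are illustrative, and negative channel differences are clipped to 0 as stated earlier in the text:

```python
import numpy as np

def red_feature_gray(img, alpha=0.4, beta=0.4, gamma=0.2):
    """Method three for a red target:
    S = alpha*(R-B) + beta*(R-G) + gamma*R,
    with negative channel differences set to 0."""
    r = img[..., 0].astype(np.float64)
    g = img[..., 1].astype(np.float64)
    b = img[..., 2].astype(np.float64)
    return (alpha * np.clip(r - b, 0, None)
            + beta * np.clip(r - g, 0, None)
            + gamma * r)

def difference_map(img1, img2):
    """Difference map: absolute difference of the weighted gray maps."""
    return np.abs(red_feature_gray(img1) - red_feature_gray(img2))

# A red roof (high R) scores high; green vegetation scores near zero.
red_pixel = np.array([[[200, 20, 20]]], dtype=np.uint8)
green_pixel = np.array([[[20, 200, 20]]], dtype=np.uint8)
s_red = red_feature_gray(red_pixel)[0, 0]      # 0.4*180 + 0.4*180 + 0.2*200
s_green = red_feature_gray(green_pixel)[0, 0]  # clipped differences, 0.2*20
```

With these weights a red roof pixel scores 184 while a green vegetation pixel scores only 4, so vegetation changes are suppressed in the difference map.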
In the embodiment of the invention, the difference map of the remote sensing images is obtained preliminarily in step S11. This preliminary difference map contains almost all tiny change areas and cannot be used to generate the final change image directly, so a subset of pixel points needs to be sampled and processed in combination with the neighborhood information of each pixel point. As shown in fig. 3, step S12 calculates the related information between each pixel point in the difference map and the other pixel points in its neighborhood range; the specific steps are as follows:
and S121, dividing a neighborhood range for each pixel point in the difference map.
And S122, generating pixel vectors of the pixel points corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range.
And S123, generating a target matrix according to the pixel vector of each pixel point in the difference map.
And S124, performing sparsification treatment on the target matrix, and reflecting relevant information of each pixel point in the disparity map and other pixel points in the neighborhood range by using the matrix after the sparsification treatment.
The sparsification of the target matrix specifically includes the following steps (not shown in the figures):
S1241, select target pixel points for change detection from the difference map according to a preset rule. Specifically, to avoid results peculiar to a particular image, only a subset of points is selected from one image for calculation. There are several selection methods; the main principles are that the selected points should be uniformly distributed, not repeated, and moderate in number. The invention adopts a uniform random sampling mode to select the target pixel points; specific selection modes include, but are not limited to, the following two: selecting one target pixel point every fixed number of pixel columns within the same pixel row, and one every fixed number of pixel rows across rows; or dividing the difference map into a plurality of pixel units according to a preset division standard and selecting the pixel point at the same relative position in each unit as a target pixel point.
S1242, generating a source matrix according to the pixel vectors corresponding to the target pixel points.
S1243, calculating a transformation matrix of the covariance matrix of the source matrix. In this embodiment, in order to implement linear transformation of a target matrix, the present invention implements transformation processing on the target matrix by calculating a transformation matrix corresponding to a source matrix, where the calculation method of the transformation matrix includes: performing zero-averaging processing on each row in the source matrix; calculating a covariance matrix of the matrix after zero-mean processing, and calculating an eigenvalue and an eigenvector of the covariance matrix; and sequentially arranging the eigenvectors from top to bottom according to the magnitude sequence of the eigenvalues to obtain the transformation matrix.
And S1244, transforming the target matrix according to the transformation matrix.
Further, after the transforming the target matrix according to the transformation matrix, the method further includes: and performing sparsification processing on the matrix after the transformation processing. The specific implementation mode is as follows: searching pixel information which is smaller than a preset threshold value and exists in the matrix after the transformation processing; the found pixel information is set to 0.
The following explains a specific implementation method for calculating the relevant information between each pixel point in the disparity map and other pixel points in the neighborhood range thereof by a specific embodiment.
In this embodiment, a single pixel often reflects only the degree of change at its own position, but for an image the changes of interest are necessarily continuous, so the information of the surrounding points must be taken into account. The pixel points in the neighborhood of each pixel point are therefore included: the selected neighborhood range is a region of size 5 × 5, which is flattened so that the information of one pixel point changes from a single value into a vector of size 1 × 25 representing the contribution of the pixel and its surrounding points.
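The flattening of each pixel's 5 × 5 neighborhood into a 25-dimensional column vector can be sketched as follows; a small 6 × 6 map stands in for the 1200 × 1200 difference map, and zero padding at the edges follows the treatment of edge points described later:

```python
import numpy as np

def neighborhood_vectors(diff, size=5):
    """Flatten the size x size neighborhood of every pixel into a
    size*size-dimensional column vector; parts of a neighborhood that
    fall outside the image are padded with 0."""
    pad = size // 2
    padded = np.pad(diff, pad, mode='constant')
    h, w = diff.shape
    cols = []
    for i in range(h):
        for j in range(w):
            cols.append(padded[i:i + size, j:j + size].ravel())
    # One column per pixel: a 25-row, h*w-column matrix X.
    return np.array(cols).T

diff = np.arange(36, dtype=np.float64).reshape(6, 6)
X = neighborhood_vectors(diff)
```

Each column of X is the 1 × 25 vector of one pixel; the middle entry of a column (index 12) is the pixel's own difference value.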
In order to avoid the special result of the special graph, only a part of points are selected from one graph for calculation, and the selection method is multiple, and the main principle of ensuring that the selected points are uniformly distributed, are not repeated and are moderate in quantity is adopted.
Determine a selection scheme. In this embodiment, uniform random sampling is used with a 5 × 5 neighborhood range and a picture size of 1200 × 1200. Without repetition, one picture contains 240 × 240 non-overlapping 5 × 5 regions, giving 240 × 240 sampled pixel points; the distance between each sampled point and its neighbors above, below, left and right is 5, and the points cover the whole picture uniformly, which satisfies the principle that the selected points are uniformly distributed, not repeated, and moderate in number.
The vectors formed by the selected pixel points and their neighborhoods form a vector set that can be regarded as a matrix, each column representing the vector of one pixel point. The operation is to arrange the 25-dimensional vectors of the 240 × 240 pixel points obtained above into a matrix X with 25 rows and 240 × 240 columns.
And reducing the dimension of the pixel point vector. The vectors usually have a linear relationship, and in order to reduce the dimension of the vectors, the eigenvectors of this matrix are found. The specific operation steps are as follows:
Zero-averaging. The 25-dimensional vectors of the 240 × 240 pixel points are combined into a matrix X with 25 rows and 240 × 240 columns, and each row of X is zero-averaged, i.e., the mean of that row is subtracted.
Calculate the covariance matrix. For the zero-averaged matrix X, the covariance matrix C is obtained according to the following formula:
C=(1/m)·X·Xᵀ
where m is the number of columns of X, i.e. the number of sampled pixel points.
and solving a transformation matrix. And (3) solving the characteristic value and the characteristic vector of the C, and forming the characteristic vector into a new matrix from top to bottom according to the size of the characteristic value to obtain the transformation matrix K which needs to be found, wherein the dimension of the matrix is 25 x 25.
Apply the transformation to all pixel points. The matrix K obtained in the previous step was computed from a subset of points of one picture; it is considered that this K contains the linear transformation to be applied to all points in order to reduce the data dimension.
Include all pixel points. For each pixel point of the difference map, take its 5 × 5 neighborhood to obtain a vector, giving 1200 × 1200 vectors in total; for edge points, the parts of the neighborhood that fall outside the image are padded with 0.
Compose a new matrix. Since 25 is the dimension of each vector and 1200 × 1200 is the total number of pixels in the whole image, all vectors are arranged into a new matrix X_n with 25 rows and 1200 × 1200 columns.
A transformation is applied. Applying transformation K to X _ n according to the following equation:
X_n=K·X_n
The result is again a matrix with 25 rows and 1200 × 1200 columns, but this matrix should be sparser than the original X_n, i.e., contain more values close to 0.
Sparsification. For the purpose of sparsification, a threshold of 5 is set and all values in the result smaller than 5 are set to 0, which introduces a nonlinear step into the otherwise linear transformation.
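The transform-and-threshold step can be sketched as follows; the identity matrix stands in for a real transformation matrix K, and the literal rule from the text (values less than 5 set to 0) is applied:

```python
import numpy as np

def sparsify(K, X_n, threshold=5.0):
    """Apply the linear transform Y = K @ X_n, then set entries
    smaller than the threshold to 0 (the text uses threshold 5)."""
    Y = K @ X_n
    Y[Y < threshold] = 0.0
    return Y

# Identity stand-in for K; a tiny 3x2 matrix stands in for X_n.
K = np.eye(3)
X_n = np.array([[10.0, 2.0],
                [6.0, 4.9],
                [5.0, 7.0]])
Y = sparsify(K, X_n)
```

Entries 2.0 and 4.9 fall below the threshold and are zeroed; 5.0 is kept since it is not smaller than 5.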
Through the above steps, the difference map of 1200 × 1200 pixels is turned into a sparse matrix with 25 rows and 1200 × 1200 columns.
In the embodiment of the present invention, many pixel points in the difference map correspond to changes that are not of interest, while the changes of interest are usually large and stable. The detected pixel points can therefore be regarded as two main types: real change points with aggregation and continuity, and discrete non-change points; these need to be separated so that the adjusted pixel points form a change region. As shown in fig. 4, step S13 finds a target change region from the difference map according to the related information of each pixel point in the difference map and the other pixel points in its specified neighborhood range, and optimizes the target change region, specifically including the following steps:
s131, classifying the pixel vectors of all the pixel points in the difference map according to the relevant information of all the pixel points in the difference map and other pixel points in the appointed neighborhood range.
The specific implementation process is as follows: configuring the number of categories of the classification categories; calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map; if the calculated minimum distance is greater than the maximum distance between any two category points, the pixel vector of the current pixel point is divided into a new category, and the two corresponding categories are merged; otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector with the minimum distance belongs.
S132, searching a target classification category with the least pixel vectors in all classification categories.
And S133, taking the pixel point corresponding to each pixel vector in the target classification category as a target change area.
In this embodiment, since many pixels in the difference image are slightly changed without concern, the change often interested in is large and stable, and therefore the detected pixels can be regarded as two main types: has real changing points of polymerization and continuity and discrete non-changing points.
Classify the vectors. Through the previous steps, the related information of each pixel point and its surrounding neighborhood has been obtained, and all pixel points need to be classified according to this information. In theory, any well-established and reliable classification method can be applied here; however, because the problem has no labels, i.e., it cannot be known in advance which class a pixel point actually belongs to, only unsupervised classification methods are applicable. The specific steps are as follows:
Determine the number of classes. The number of classes must be determined in advance; since there are no labels, this number determines how many elements each class will finally contain. The number of classes selected here does not exceed 5 and is finally set to 3.
Perform cluster classification. Starting from the vector of the first point, a new vector is added in each step and its distance to the other vectors is calculated; the distance can be chosen as the L1 distance, the L2 distance, the histogram distance, or others. In this embodiment the chi-squared distance is used, which for two vectors can be written as:
χ²(a,b)=Σi(ai-bi)²/(ai+bi), i=1,…,N
where a and b are the two vectors and N is the vector dimension, here 25. If the minimum distance from the new vector is greater than the maximum distance between the points of some two classes, the new vector is put into a new class and the corresponding two classes are merged; otherwise, the newly added vector is assigned to the class of its nearest vector.
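The chi-squared distance between two vectors can be sketched as follows; the small `eps` guard against a zero denominator is an added assumption, not part of the text:

```python
import numpy as np

def chi_square_distance(a, b, eps=1e-12):
    """Chi-squared distance between two vectors:
    chi2(a, b) = sum_i (a_i - b_i)^2 / (a_i + b_i).
    eps avoids division by zero when both entries are 0 (assumption)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.sum((a - b) ** 2 / (a + b + eps)))

d_same = chi_square_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
d_diff = chi_square_distance([4.0, 0.0], [0.0, 4.0])   # 16/4 + 16/4
```

Identical vectors give distance 0, and the denominator normalizes each component's squared difference by the component magnitudes, so small differences on large values count less than on small values.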
Select the class that forms the result. Choosing the right class as the final result largely determines the quality of the output, and empirical results show that the change points that really need attention fall into the class with the fewest members after classification. Therefore, in this embodiment, a preset program selects the result class automatically: the class set with the fewest vectors among the 3 classes is taken, and the pixel points corresponding to all vectors in this set are regarded as the detected change points.
And S134, carrying out image optimization processing on the target change area.
The resulting target change area may still include some noise interference caused by large-area land-surface changes, appearing as small, scattered, irregular spots in the image. These are indeed real changes, but they do not require attention in this application, so the target change area is subjected to image optimization processing to remove them.
The specific implementation process is as follows: scanning pixel points in the target change area in sequence by adopting a scanning unit with a preset size; performing convolution operation on the pixel points in the scanning unit by using a first preset matrix, and removing the pixel points currently positioned in the central position of the scanning unit if the convolution result is less than or equal to a preset first threshold value; and performing convolution operation on the pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold, performing pixel information resetting on all the pixel points in the scanning unit according to the pixel point currently located at the central position of the scanning unit.
The following explanation of the optimization process of the resulting image is achieved by a specific embodiment.
First, a convolution operation is used to scan the whole picture and decide whether each pixel point should be preserved, so as to remove small noise points.
Determine a convolution kernel. A small square matrix is defined as the first preset matrix, with entries of 0 or 1.
Convolution to remove noise points. A convolution operation is performed on every 5 × 5 area of the obtained image with a threshold of 9: the center point of the corresponding window is preserved only if the convolution result is greater than 9; otherwise, it is removed.
Second, the whole picture is scanned with a convolution operation to decide whether the space around each pixel point should be filled, so as to enlarge the region of interest.
Determine a convolution kernel. A small square matrix is defined as the second preset matrix, with entries of 0 or 1.
Convolution to fill regions. A convolution operation is performed on every 5 × 5 area of the obtained image with a threshold of 5: only if the convolution result is smaller than 5 is the center point of the corresponding window considered to need filling, and all points of the window are then filled to enlarge the region of interest.
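The two convolution passes can be sketched on a binary change mask as follows; all-ones 5 × 5 kernels and the exact fill rule (resetting the whole window from its center) are assumed interpretations of the text:

```python
import numpy as np

def clean_mask(mask, remove_thresh=9, fill_thresh=5, size=5):
    """Pass 1 (noise removal): a change pixel survives only if its
    size x size window holds more than remove_thresh change pixels.
    Pass 2 (filling): around a surviving change pixel whose window
    holds fewer than fill_thresh change pixels, the whole window is
    set to 1 to enlarge the region (assumed interpretation)."""
    pad = size // 2
    h, w = mask.shape
    padded = np.pad(mask, pad, mode='constant')
    out = np.zeros_like(mask)
    for i in range(h):                       # pass 1: remove noise
        for j in range(w):
            if mask[i, j] and padded[i:i+size, j:j+size].sum() > remove_thresh:
                out[i, j] = 1
    padded = np.pad(out, pad, mode='constant')
    filled = out.copy()
    for i in range(h):                       # pass 2: fill sparse spots
        for j in range(w):
            if out[i, j] and padded[i:i+size, j:j+size].sum() < fill_thresh:
                filled[max(i-pad, 0):i+pad+1, max(j-pad, 0):j+pad+1] = 1
    return filled

# An isolated noise point is removed; a dense 4x4 block survives.
mask = np.zeros((10, 10), dtype=np.int64)
mask[0, 0] = 1                    # isolated noise point
mask[4:8, 4:8] = 1                # dense change region
cleaned = clean_mask(mask)
```

The isolated point disappears (its window contains only 1 change pixel) while the interior of the dense block is kept, matching the intent of removing small, scattered spots.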
Finally, vectorize the regions. After the removal and enlargement steps, each area in the result may still be irregular; for convenience of viewing in practical use, the boundary points of each continuous area are found and the areas are recombined into comparatively regular regions according to these boundary points.
The embodiment of the invention provides a method for selecting a suitable difference map according to the color cast, illumination conditions and shooting angle of the original images and the type of change to be detected, so that parameters as appropriate as possible are chosen for the image conditions.
The embodiment of the invention provides a process for classifying pixel points by combining each pixel point with its neighborhood information, and realizes a transformation process for extracting the degree of correlation between a pixel point and its neighborhood.
The embodiment of the invention realizes the process of classifying the high-dimensional data under the condition of no label according to the high-dimensional information of the pixel points.
The embodiment of the invention provides a subsequent flow for optimizing the result image with noise, and realizes the purposes of eliminating the noise and amplifying a useful area by utilizing the removing and filling operations, so that the practicability is higher.
For simplicity of explanation, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of the acts described, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that the acts involved are not necessarily required by the present invention.
Fig. 5 schematically shows a structural diagram of a remote sensing image contrast change detection device according to an embodiment of the present invention. Referring to fig. 5, the remote sensing image contrast change detection apparatus according to the embodiment of the present invention specifically includes an extraction module 201, a calculation module 202, and a processing module 203, where:
the extraction module 201 is configured to extract a difference map of two remote sensing images of the same area to be compared at different time phases;
a calculating module 202, configured to calculate relevant information of each pixel point in the disparity map and other pixel points in the specified neighborhood range;
and the processing module 203 is configured to search a target change region from the disparity map according to the relevant information of each pixel point in the disparity map and other pixel points in the specified neighborhood range of the pixel point, and perform optimization processing on the target change region.
In an embodiment of the present invention, the extracting module 201 is specifically configured to extract feature values of R, G, B three channels of two remote sensing images respectively; respectively generating a gray level picture corresponding to each remote sensing image according to the mean value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images; and respectively calculating the difference value of pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference image of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In an embodiment of the present invention, the extracting module 201 is specifically configured to extract feature values of R, G, B three channels of two remote sensing images respectively; respectively carrying out histogram statistics on characteristic values of R, G, B three channels of each remote sensing image; calculating the histogram distance of the three histograms corresponding to each remote sensing image; determining R, G, B weight values occupied by characteristic values of three channels when corresponding gray level pictures are generated according to the size relation of histogram distances of three histograms corresponding to each remote sensing image, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight values; and respectively calculating the difference value of pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference image of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In an embodiment of the present invention, the extracting module 201 is specifically configured to obtain a target color feature of an object to be detected; respectively extracting characteristic values of R, G, B three channels of the two remote sensing images; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the characteristic value of a color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the weight value occupied by the difference value of the characteristic value between the color channel and other channels when generating a gray level picture, and generating the gray level picture corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference image of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In the embodiment of the present invention, as shown in fig. 6, the calculating module 202 includes a configuration unit 2021, a first generating unit 2022, a second generating unit 2023, and a calculating unit 2024, where:
a configuration unit 2021, configured to divide a neighborhood range for each pixel point in the disparity map;
the first generating unit 2022 is configured to generate a pixel vector of a pixel point corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range;
the second generating unit 2023 is configured to generate a target matrix according to the pixel vector of each pixel in the disparity map;
the calculating unit 2024 is configured to perform sparsification on the target matrix, and reflect, by using the sparsified matrix, related information between each pixel in the disparity map and other pixels in the neighborhood range of the pixel.
The calculating unit 2024 is specifically configured to select a target pixel point for change detection from the difference map according to a preset rule; generating a source matrix according to the pixel vectors corresponding to the target pixel points; calculating a transformation matrix of a covariance matrix of the source matrix; and transforming the target matrix according to the transformation matrix.
Further, the calculating unit 2024 is further configured to, after transforming the target matrix according to the transformation matrix, search the transformed matrix for pixel information smaller than a preset threshold and set the found pixel information to 0.
Further, the calculating unit 2024 is specifically configured to select, in the same pixel row of the disparity map, one target pixel point every time every other than the same number of pixel columns, and select, in the same pixel row, one target pixel point every other than the same number of pixel rows; or dividing the difference image into a plurality of pixel units according to a preset division standard, and selecting pixel points positioned at the same distribution position in each pixel unit as target pixel points.
Further, the calculating unit 2024 is specifically configured to perform zero-averaging processing on each row in the source matrix; calculating a covariance matrix of the matrix after zero-mean processing, and calculating an eigenvalue and an eigenvector of the covariance matrix; and sequentially arranging the eigenvectors from top to bottom according to the magnitude sequence of the eigenvalues to obtain the transformation matrix.
In the embodiment of the present invention, as shown in fig. 7, the processing module 203 specifically includes a classifying unit 2031, a searching unit 2032, and an optimizing unit 2033, where:
a classifying unit 2031, configured to classify the pixel vector of each pixel point in the disparity map according to the related information of each pixel point in the disparity map and other pixel points in the specified neighborhood range thereof;
a searching unit 2032, configured to search for a target classification category with the least pixel vectors in each classification category;
and the optimization processing unit 2033 is configured to use a pixel point corresponding to each pixel vector in the target classification category as a target change region, and perform image optimization processing on the target change region.
In this embodiment, the classifying unit 2031 is specifically configured to configure the number of classes of classification categories; calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map; if the calculated minimum distance is greater than the maximum distance between any two category points, the pixel vector of the current pixel point is divided into a new category, and the two corresponding categories are merged; otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector with the minimum distance belongs.
In this embodiment, the optimization processing unit 2033 is specifically configured to scan the pixel points in the target change area in sequence by using a scanning unit with a preset size; performing convolution operation on the pixel points in the scanning unit by using a first preset matrix, and removing the pixel points currently positioned in the central position of the scanning unit if the convolution result is less than or equal to a preset first threshold value; and performing convolution operation on the pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold, performing pixel information resetting on all the pixel points in the scanning unit according to the pixel point currently located at the central position of the scanning unit.
Since the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
The device embodiments described above are merely illustrative. Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The remote sensing image contrast change detection method and device provided by the embodiments of the present invention are suitable for detecting changes of complex objects in densely built-up urban regions of remote sensing images. They can automatically search for a target change region in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range, which reduces manual effort, allows change regions to be detected accurately and efficiently from two remote sensing images, and improves the generalization capability of the change detection method.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method as described above.
In this embodiment, if the integrated modules/units of the remote sensing image contrast change detection device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The electronic device provided by the embodiment of the present invention comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the steps of the above embodiments of the remote sensing image contrast change detection method, such as S11-S13 shown in FIG. 1. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the above embodiments of the remote sensing image contrast change detection device, such as the extraction module 201, the calculation module 202, and the processing module 203 shown in FIG. 5.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program in the remote sensing image contrast change detection device. For example, the computer program may be divided into an extraction module 201, a calculation module 202 and a processing module 203.
The electronic device may be a mobile computer, a notebook computer, a palmtop computer, a mobile phone, or another such device. The electronic device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the electronic device in this embodiment may include more or fewer components, combine certain components, or use different components; for example, the electronic device may also include input/output devices, a network access device, a bus, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and connects the various parts of the entire electronic device through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (26)
1. A remote sensing image contrast change detection method is characterized by comprising the following steps:
extracting a difference map of two remote sensing images of the same region to be compared at different time phases;
calculating related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range;
and searching for a target change region in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range, and performing optimization processing on the target change region.
2. The method of claim 1, wherein the extracting of a difference map of two remote sensing images of the same region to be compared at different time phases comprises:
respectively extracting characteristic values of the R, G and B channels of the two remote sensing images;
respectively generating a grayscale picture corresponding to each remote sensing image according to the mean value of the characteristic values of the R, G and B channels corresponding to each pixel point in that remote sensing image;
and respectively calculating the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point.
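As an illustrative sketch (not part of the claims), the grayscale conversion and difference-map construction of claim 2 could be implemented as follows, assuming two co-registered H x W x 3 images; using the absolute value of the per-pixel difference is an assumption, since the claim only speaks of a difference value:

```python
import numpy as np

def difference_map(img1, img2):
    """Convert each remote sensing image to grayscale by averaging its
    R, G and B characteristic values per pixel, then take the per-pixel
    absolute difference of the two grayscale pictures."""
    g1 = img1.astype(float).mean(axis=2)   # mean of R, G, B per pixel
    g2 = img2.astype(float).mean(axis=2)
    return np.abs(g1 - g2)
```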
3. The method of claim 1, wherein the extracting of a difference map of two remote sensing images of the same region to be compared at different time phases comprises:
respectively extracting characteristic values of the R, G and B channels of the two remote sensing images;
respectively performing histogram statistics on the characteristic values of the R, G and B channels of each remote sensing image;
calculating the histogram distances between the three histograms corresponding to each remote sensing image;
determining, according to the relative magnitudes of the histogram distances between the three histograms corresponding to each remote sensing image, the weights assigned to the characteristic values of the R, G and B channels when generating the corresponding grayscale picture, and generating the grayscale picture corresponding to the current remote sensing image according to the determined weights;
and respectively calculating the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point.
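An illustrative sketch of the histogram-weighted grayscale conversion in claim 3. The claim specifies neither the histogram distance metric nor the rule mapping distances to weights, so L1 histogram distance and weighting each channel by the summed distance of its histogram to the other two are assumptions:

```python
import numpy as np

def histogram_weighted_gray(img, bins=32):
    """Grayscale conversion with channel weights derived from pairwise
    histogram distances between the R, G and B channels."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(3)]
    # pairwise L1 distances between the three channel histograms
    dist = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            dist[a, b] = np.abs(hists[a] - hists[b]).sum()
    w = dist.sum(axis=1)
    # identical channels yield zero distances: fall back to equal weights
    w = w / w.sum() if w.sum() > 0 else np.full(3, 1.0 / 3)
    return sum(w[c] * img[..., c].astype(float) for c in range(3))
```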
4. The method of claim 1, wherein the extracting of a difference map of two remote sensing images of the same region to be compared at different time phases comprises:
acquiring a target color characteristic of an object to be detected;
respectively extracting characteristic values of the R, G and B channels of the two remote sensing images;
calculating, for each pixel point in each remote sensing image, the differences between the characteristic value of the color channel corresponding to the target color characteristic and the characteristic values of the other channels;
adjusting the weights assigned, when generating a grayscale picture, to the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and to the differences between that characteristic value and those of the other channels, and generating the grayscale picture corresponding to the current remote sensing image according to the adjusted weights;
and respectively calculating the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point.
5. The method of claim 1, wherein the calculating of related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range comprises:
dividing a neighborhood range for each pixel point in the difference map;
generating, according to the pixel information of each pixel point in each neighborhood range, the pixel vector of the pixel point corresponding to the current neighborhood range;
generating a target matrix according to the pixel vectors of the pixel points in the difference map;
and performing sparsification processing on the target matrix, the matrix after sparsification processing reflecting the related information between each pixel point in the difference map and the other pixel points in its neighborhood range.
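An illustrative sketch of the neighborhood-vector and target-matrix construction of claim 5: each pixel of the difference map is represented by the flattened k x k window around it, and these vectors are stacked column-wise into the target matrix. Zero padding at the image border is an assumption, since the claim does not specify edge handling:

```python
import numpy as np

def pixel_vectors(diff, k=3):
    """Build the target matrix: one column per pixel of the difference
    map, each column being the flattened k x k neighborhood window."""
    pad = k // 2
    padded = np.pad(diff.astype(float), pad)   # zero padding at the edges
    h, w = diff.shape
    cols = [padded[y:y + k, x:x + k].ravel()
            for y in range(h) for x in range(w)]
    return np.stack(cols, axis=1)              # shape (k*k, h*w)
```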
6. The method of claim 5, wherein the sparsification processing of the target matrix comprises:
selecting target pixel points for change detection from the difference map according to a preset rule;
generating a source matrix according to the pixel vectors corresponding to the target pixel points;
computing the transformation matrix of the covariance matrix of the source matrix;
and transforming the target matrix according to the transformation matrix.
7. The method of claim 6, wherein, after the transforming of the target matrix according to the transformation matrix, the method further comprises:
performing sparsification processing on the transformed matrix;
wherein the sparsification processing of the transformed matrix comprises:
searching for pixel information in the transformed matrix that is smaller than a preset threshold;
and setting the found pixel information to 0.
8. The method according to claim 6, wherein the selecting of target pixel points for change detection from the difference map according to a preset rule comprises:
selecting one target pixel point every fixed number of pixel columns within the same pixel row of the difference map, and one target pixel point every fixed number of pixel rows within the same pixel column; or
dividing the difference map into a plurality of pixel units according to a preset division standard, and selecting the pixel points located at the same distribution position in each pixel unit as target pixel points.
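The first sampling option of claim 8 amounts to selecting target pixel points on a regular grid; a short sketch (the interval value is illustrative):

```python
import numpy as np

def sample_targets(h, w, step=4):
    """Select target pixel coordinates every `step` columns within a
    row and every `step` rows down the image."""
    ys, xs = np.meshgrid(np.arange(0, h, step), np.arange(0, w, step),
                         indexing="ij")
    return list(zip(ys.ravel().tolist(), xs.ravel().tolist()))
```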
9. The method of claim 6, wherein the computing of the transformation matrix of the covariance matrix of the source matrix comprises:
performing zero-mean processing on each row of the source matrix;
calculating the covariance matrix of the zero-mean matrix, and calculating the eigenvalues and eigenvectors of the covariance matrix;
and arranging the eigenvectors from top to bottom according to the magnitude of the corresponding eigenvalues to obtain the transformation matrix.
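The three steps of claim 9 are the classical principal component analysis (PCA) construction of a projection matrix; a compact sketch (dividing the covariance by the number of columns rather than n - 1 is a minor convention choice, and descending eigenvalue order is assumed):

```python
import numpy as np

def transformation_matrix(source):
    """Zero-mean each row of the source matrix, form the covariance
    matrix, and stack its eigenvectors top to bottom in decreasing
    eigenvalue order."""
    X = source - source.mean(axis=1, keepdims=True)  # zero-mean each row
    C = X @ X.T / X.shape[1]                         # covariance matrix
    vals, vecs = np.linalg.eigh(C)                   # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]                   # sort by decreasing eigenvalue
    return vecs[:, order].T                          # eigenvectors as rows
```

Projecting the zero-mean data with this matrix concentrates the variance in the top rows, which is what makes the subsequent thresholding step sparsify the transformed target matrix.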
10. The method according to claim 5, wherein the searching for a target change region in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range, and the performing of optimization processing on the target change region, comprise:
classifying the pixel vectors of the pixel points in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range;
searching for the target classification category containing the fewest pixel vectors among all classification categories;
taking the pixel points corresponding to the pixel vectors in the target classification category as the target change region;
and performing image optimization processing on the target change region.
11. The method of claim 10, wherein the classifying of the pixel vector of each pixel point in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range comprises:
configuring the number of classification categories;
calculating the distance between the pixel vector of the current pixel point and the pixel vectors of the other pixel points in the difference map;
if the calculated minimum distance is greater than the maximum distance between the points of any two categories, assigning the pixel vector of the current pixel point to a new category and merging the two corresponding categories;
otherwise, assigning the pixel vector of the current pixel point to the classification category of the pixel vector at the minimum distance.
12. The method according to claim 10, wherein the performing of image optimization processing on the target change region comprises:
scanning the pixel points in the target change region in sequence with a scanning unit of a preset size;
performing a convolution operation on the pixel points in the scanning unit with a first preset matrix, and removing the pixel point currently at the center of the scanning unit if the convolution result is less than or equal to a preset first threshold;
and performing a convolution operation on the pixel points in the scanning unit with a second preset matrix, and, if the convolution result is less than a preset second threshold, resetting the pixel information of all pixel points in the scanning unit according to the pixel point currently at the center of the scanning unit.
13. A remote sensing image contrast change detection device, characterized in that the device comprises:
the extraction module, configured to extract a difference map of two remote sensing images of the same region to be compared at different time phases;
the calculation module, configured to calculate related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range;
and the processing module, configured to search for a target change region in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range, and to perform optimization processing on the target change region.
14. The device according to claim 13, wherein the extraction module is specifically configured to: respectively extract characteristic values of the R, G and B channels of the two remote sensing images; respectively generate a grayscale picture corresponding to each remote sensing image according to the mean value of the characteristic values of the R, G and B channels corresponding to each pixel point in that remote sensing image; and respectively calculate the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generate a difference map of the two remote sensing images according to the difference value of each pixel point.
15. The device according to claim 13, wherein the extraction module is specifically configured to: respectively extract characteristic values of the R, G and B channels of the two remote sensing images; respectively perform histogram statistics on the characteristic values of the R, G and B channels of each remote sensing image; calculate the histogram distances between the three histograms corresponding to each remote sensing image; determine, according to the relative magnitudes of the histogram distances between the three histograms corresponding to each remote sensing image, the weights assigned to the characteristic values of the R, G and B channels when generating the corresponding grayscale picture, and generate the grayscale picture corresponding to the current remote sensing image according to the determined weights; and respectively calculate the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generate a difference map of the two remote sensing images according to the difference value of each pixel point.
16. The device according to claim 13, wherein the extraction module is specifically configured to: acquire a target color characteristic of an object to be detected; respectively extract characteristic values of the R, G and B channels of the two remote sensing images; calculate, for each pixel point in each remote sensing image, the differences between the characteristic value of the color channel corresponding to the target color characteristic and the characteristic values of the other channels; adjust the weights assigned, when generating a grayscale picture, to the characteristic value of the color channel corresponding to the target color characteristic of each pixel point and to the differences between that characteristic value and those of the other channels, and generate the grayscale picture corresponding to the current remote sensing image according to the adjusted weights; and respectively calculate the difference value of pixel points located at the same position in the grayscale pictures corresponding to the two remote sensing images, and generate a difference map of the two remote sensing images according to the difference value of each pixel point.
17. The apparatus of claim 13, wherein the computing module comprises:
a configuration unit, configured to divide a neighborhood range for each pixel point in the difference map;
a first generating unit, configured to generate, according to the pixel information of each pixel point in each neighborhood range, the pixel vector of the pixel point corresponding to the current neighborhood range;
a second generating unit, configured to generate a target matrix according to the pixel vectors of the pixel points in the difference map;
and a calculating unit, configured to perform sparsification processing on the target matrix, the matrix after sparsification processing reflecting the related information between each pixel point in the difference map and the other pixel points in its neighborhood range.
18. The apparatus according to claim 17, wherein the computing unit is specifically configured to: select target pixel points for change detection from the difference map according to a preset rule; generate a source matrix according to the pixel vectors corresponding to the target pixel points; compute the transformation matrix of the covariance matrix of the source matrix; and transform the target matrix according to the transformation matrix.
19. The apparatus according to claim 18, wherein the computing unit is further configured to, after transforming the target matrix according to the transformation matrix, search the transformed matrix for pixel information smaller than a preset threshold and set the found pixel information to 0.
20. The apparatus according to claim 18, wherein the computing unit is specifically configured to: select one target pixel point every fixed number of pixel columns within the same pixel row of the difference map, and one target pixel point every fixed number of pixel rows within the same pixel column; or divide the difference map into a plurality of pixel units according to a preset division standard and select the pixel points located at the same distribution position in each pixel unit as target pixel points.
21. The apparatus according to claim 18, wherein the computing unit is specifically configured to: perform zero-mean processing on each row of the source matrix; calculate the covariance matrix of the zero-mean matrix, and calculate the eigenvalues and eigenvectors of the covariance matrix; and arrange the eigenvectors from top to bottom according to the magnitude of the corresponding eigenvalues to obtain the transformation matrix.
22. The apparatus of claim 21, wherein the processing module comprises:
a classification unit, configured to classify the pixel vectors of the pixel points in the difference map according to the related information between each pixel point in the difference map and the other pixel points in its specified neighborhood range;
a searching unit, configured to search for the target classification category containing the fewest pixel vectors among all classification categories;
and an optimization processing unit, configured to take the pixel points corresponding to the pixel vectors in the target classification category as the target change region and to perform image optimization processing on the target change region.
23. The apparatus according to claim 22, wherein the classification unit is specifically configured to: configure the number of classification categories; calculate the distance between the pixel vector of the current pixel point and the pixel vectors of the other pixel points in the difference map; if the calculated minimum distance is greater than the maximum distance between the points of any two categories, assign the pixel vector of the current pixel point to a new category and merge the two corresponding categories; otherwise, assign the pixel vector of the current pixel point to the classification category of the pixel vector at the minimum distance.
24. The apparatus according to claim 22, wherein the optimization processing unit is specifically configured to: scan the pixel points in the target change region in sequence with a scanning unit of a preset size; perform a convolution operation on the pixel points in the scanning unit with a first preset matrix, and remove the pixel point currently at the center of the scanning unit if the convolution result is less than or equal to a preset first threshold; and perform a convolution operation on the pixel points in the scanning unit with a second preset matrix, and, if the convolution result is less than a preset second threshold, reset the pixel information of all pixel points in the scanning unit according to the pixel point currently at the center of the scanning unit.
25. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
26. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-12 are implemented when the processor executes the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911305648.9A CN111179230B (en) | 2019-12-18 | 2019-12-18 | Remote sensing image contrast change detection method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911305648.9A CN111179230B (en) | 2019-12-18 | 2019-12-18 | Remote sensing image contrast change detection method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111179230A true CN111179230A (en) | 2020-05-19 |
CN111179230B CN111179230B (en) | 2023-06-09 |
Family
ID=70646288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911305648.9A Active CN111179230B (en) | 2019-12-18 | 2019-12-18 | Remote sensing image contrast change detection method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179230B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111950607A (en) * | 2020-07-28 | 2020-11-17 | 深圳市元征科技股份有限公司 | Reminding method, reminding device and server |
CN112288639A (en) * | 2020-07-20 | 2021-01-29 | 深圳市智影医疗科技有限公司 | Image contrast subtraction method and device, terminal equipment and storage medium |
CN112464803A (en) * | 2020-11-26 | 2021-03-09 | 泰康保险集团股份有限公司 | Image comparison method and device |
CN112560740A (en) * | 2020-12-23 | 2021-03-26 | 中国水利水电科学研究院 | PCA-Kmeans-based visible light remote sensing image change detection method |
CN112766264A (en) * | 2021-01-25 | 2021-05-07 | 广州互联网法院 | Picture comparison method, electronic device and computer readable storage medium |
CN113554011A (en) * | 2021-09-22 | 2021-10-26 | 航天宏图信息技术股份有限公司 | Method, device and equipment for detecting change of remote sensing image and storage medium |
CN113657338A (en) * | 2021-08-25 | 2021-11-16 | 平安科技(深圳)有限公司 | Transmission state identification method and device, computer equipment and storage medium |
CN113724205A (en) * | 2021-08-09 | 2021-11-30 | 浙江大华技术股份有限公司 | Image change detection method, apparatus and storage medium |
CN114049568A (en) * | 2021-11-29 | 2022-02-15 | 中国平安财产保险股份有限公司 | Object shape change detection method, device, equipment and medium based on image comparison |
CN115164845A (en) * | 2022-07-04 | 2022-10-11 | 新疆维吾尔自治区环境保护科学研究院 | Ecological restoration remote sensing automatic detection device and method |
CN116258968A (en) * | 2023-05-10 | 2023-06-13 | 蒙阴县国有林场总场(蒙阴县国有岱崮林场) | Method and system for managing fruit diseases and insects |
CN116758067A (en) * | 2023-08-16 | 2023-09-15 | 梁山县成浩型钢有限公司 | Metal structural member detection method based on feature matching |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473755A (en) * | 2013-09-07 | 2013-12-25 | 西安电子科技大学 | SAR image sparsing denoising method based on change detection |
CN104091151A (en) * | 2014-06-30 | 2014-10-08 | 南京信息工程大学 | Vehicle identification method based on Gabor feature extraction and sparse representation |
Legal events: 2019-12-18 — CN application CN201911305648.9A, granted as patent CN111179230B (status: active).
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473755A (en) * | 2013-09-07 | 2013-12-25 | 西安电子科技大学 | SAR image sparsing denoising method based on change detection |
CN104091151A (en) * | 2014-06-30 | 2014-10-08 | 南京信息工程大学 | Vehicle identification method based on Gabor feature extraction and sparse representation |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112288639A (en) * | 2020-07-20 | 2021-01-29 | 深圳市智影医疗科技有限公司 | Image contrast subtraction method and device, terminal equipment and storage medium |
CN111950607A (en) * | 2020-07-28 | 2020-11-17 | 深圳市元征科技股份有限公司 | Reminding method, reminding device and server |
CN112464803A (en) * | 2020-11-26 | 2021-03-09 | 泰康保险集团股份有限公司 | Image comparison method and device |
CN112560740A (en) * | 2020-12-23 | 2021-03-26 | 中国水利水电科学研究院 | PCA-Kmeans-based visible light remote sensing image change detection method |
CN112766264A (en) * | 2021-01-25 | 2021-05-07 | 广州互联网法院 | Picture comparison method, electronic device and computer readable storage medium |
CN112766264B (en) * | 2021-01-25 | 2024-06-07 | 广州互联网法院 | Picture comparison method, electronic device and computer readable storage medium |
CN113724205A (en) * | 2021-08-09 | 2021-11-30 | 浙江大华技术股份有限公司 | Image change detection method, apparatus and storage medium |
CN113657338A (en) * | 2021-08-25 | 2021-11-16 | 平安科技(深圳)有限公司 | Transmission state identification method and device, computer equipment and storage medium |
CN113554011B (en) * | 2021-09-22 | 2022-01-04 | 航天宏图信息技术股份有限公司 | Method, device and equipment for detecting change of remote sensing image and storage medium |
CN113554011A (en) * | 2021-09-22 | 2021-10-26 | 航天宏图信息技术股份有限公司 | Method, device and equipment for detecting change of remote sensing image and storage medium |
CN114049568A (en) * | 2021-11-29 | 2022-02-15 | 中国平安财产保险股份有限公司 | Object shape change detection method, device, equipment and medium based on image comparison |
CN115164845A (en) * | 2022-07-04 | 2022-10-11 | 新疆维吾尔自治区环境保护科学研究院 | Ecological restoration remote sensing automatic detection device and method |
CN115164845B (en) * | 2022-07-04 | 2024-04-05 | 新疆维吾尔自治区环境保护科学研究院 | Ecological restoration remote sensing automatic detection device and method |
CN116258968A (en) * | 2023-05-10 | 2023-06-13 | 蒙阴县国有林场总场(蒙阴县国有岱崮林场) | Method and system for managing fruit diseases and insects |
CN116758067A (en) * | 2023-08-16 | 2023-09-15 | 梁山县成浩型钢有限公司 | Metal structural member detection method based on feature matching |
CN116758067B (en) * | 2023-08-16 | 2023-12-01 | 梁山县成浩型钢有限公司 | Metal structural member detection method based on feature matching |
Also Published As
Publication number | Publication date |
---|---|
CN111179230B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111179230B (en) | Remote sensing image contrast change detection method and device, storage medium and electronic equipment | |
CN111192239B (en) | Remote sensing image change area detection method and device, storage medium and electronic equipment | |
US10803554B2 (en) | Image processing method and device | |
CN109886997B (en) | Identification frame determining method and device based on target detection and terminal equipment | |
US9990546B2 (en) | Method and apparatus for determining target region in video frame for target acquisition | |
US9865063B2 (en) | Method and system for image feature extraction | |
US9519660B2 (en) | Information processing apparatus, clustering method, and recording medium storing clustering program | |
KR101618996B1 (en) | Sampling method and image processing apparatus for estimating homography | |
CN116012364B (en) | SAR image change detection method and device | |
US20170178341A1 (en) | Single Parameter Segmentation of Images | |
Li et al. | Unsupervised road extraction via a Gaussian mixture model with object-based features | |
US20170309024A1 (en) | Image processing device, image processing method, and computer-readable recording medium | |
CN111199188B (en) | Pixel processing method, device, storage medium and equipment of remote sensing image difference map | |
CN112053371A (en) | Water body extraction method and device in remote sensing image | |
CN116958962A (en) | Method for detecting pre-fruit-thinning pomegranate fruits based on improved YOLOv8s | |
Engstrom et al. | Evaluating the Relationship between Contextual Features Derived from Very High Spatial Resolution Imagery and Urban Attributes: A Case Study in Sri Lanka | |
Zhang et al. | Saliency detection via image sparse representation and color features combination | |
CN116645499A (en) | Determination method, apparatus, device, storage medium, and program product | |
CN111062341A (en) | Video image area classification method, device, equipment and storage medium | |
Vandenbroucke et al. | Unsupervised color-image segmentation by multicolor space iterative pixel classification | |
CN117501326A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN110533050B (en) | Picture geographic information acquisition method and device, computer equipment and storage medium | |
CN118446938B (en) | Shadow area restoration method and device for remote sensing image and electronic equipment | |
CN117409372B (en) | Dense crowd counting method and device based on global and local density fusion | |
CN116579960B (en) | Geospatial data fusion method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||