CN111179230B - Remote sensing image contrast change detection method and device, storage medium and electronic equipment - Google Patents


Publication number
CN111179230B
Authority
CN
China
Prior art keywords
pixel
remote sensing
pixel point
matrix
target
Prior art date
Legal status
Active
Application number
CN201911305648.9A
Other languages
Chinese (zh)
Other versions
CN111179230A (en)
Inventor
陈欢欢
江贻芳
任才俊
汪勇
黄恩兴
于娜
高健
Current Assignee
Stargis Tianjin Technology Development Co ltd
University of Science and Technology of China USTC
Original Assignee
Stargis Tianjin Technology Development Co ltd
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by Stargis Tianjin Technology Development Co ltd, University of Science and Technology of China USTC filed Critical Stargis Tianjin Technology Development Co ltd
Priority to CN201911305648.9A priority Critical patent/CN111179230B/en
Publication of CN111179230A publication Critical patent/CN111179230A/en
Application granted granted Critical
Publication of CN111179230B publication Critical patent/CN111179230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a remote sensing image contrast change detection method and device, a storage medium and electronic equipment. The method comprises the following steps: extracting a difference map from two remote sensing images of the same region taken at different time phases; calculating related information between each pixel point in the difference map and the other pixel points in a specified neighborhood range; and searching the difference map for a target change area according to the related information of each pixel point and the other pixel points in the specified neighborhood range, and optimizing the target change area. Because the target change area is searched for using the related information between each pixel point and the other pixel points in its specified neighborhood range, the change area can be detected efficiently from the two remote sensing images, and the generalization capability of the change detection method is improved.

Description

Remote sensing image contrast change detection method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of remote sensing image processing technologies, and in particular, to a method and apparatus for detecting contrast change of a remote sensing image, a storage medium, and an electronic device.
Background
The contrast change of a remote sensing image refers to the changes visible between remote sensing pictures of the same region taken at different times, such as newly added buildings, changes in vegetation coverage, and land development and utilization. Change detection processes and compares the pictures, finds the positions that have changed over time, and identifies the pixel points corresponding to the changed regions.
Change Detection (CD) plays a very important role in accurately understanding surface changes from existing remote sensing image data. Because remote sensing images are easy to acquire and have a short update period, accurately and timely detecting change areas from them is of great significance for urban management and planning, urban assessment, post-disaster reconstruction, and other work. By combining each detected new building with the area in which it is located, illegal buildings can be detected effectively, which facilitates standardized urban management. Analyzing how intensively the remote sensing images have changed reveals the development rate of each region and the areas where building changes are concentrated, which helps to estimate a region's development progress and evaluate its degree of development. By locating regions with severe changes, it can be judged whether an area hit by a natural disaster such as an earthquake is being rebuilt according to plan, and the post-disaster reconstruction process can be supervised.
In order to obtain the change condition and position of the images, related industries such as surveying and mapping use tools such as ArcGIS to compare two remote sensing images manually and mark the regions where changes occur. Because buildings are densely distributed, the changes in each area must be compared carefully; moreover, because remote sensing images have low contrast, urban buildings are densely distributed, and the area to be inspected is large, manually marking the changes consumes a great deal of time and effort. Therefore, how to label change areas efficiently from two remote sensing images, so as to reduce manpower consumption and improve generalization capability, is a problem to be solved urgently.
Disclosure of Invention
The invention provides a remote sensing image contrast change detection method and device, a storage medium and electronic equipment, which solve the problems in the prior art that change areas are difficult to detect efficiently from two remote sensing images and that existing change detection methods generalize poorly.
In one aspect of the present invention, a method for detecting contrast change of a remote sensing image is provided, the method comprising:
extracting a difference map from two remote sensing images of the same region taken at different time phases;
calculating related information of each pixel point in the difference map and other pixel points in a specified neighborhood range; and
searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range, and optimizing the target change area.
Optionally, the extracting a difference map of two remote sensing images of different time phases in the same region to be compared includes:
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images;
and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the extracting a difference map of two remote sensing images of different time phases in the same region to be compared includes:
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image;
Calculating the histogram distances of the three histograms corresponding to each remote sensing image;
according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value;
and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the extracting a difference map of two remote sensing images of different time phases in the same region to be compared includes:
acquiring target color characteristics of an object to be detected;
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image;
adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value;
And respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the calculating the related information of each pixel point in the difference map and other pixel points in the neighborhood range includes:
dividing a neighborhood range for each pixel point in the difference map;
generating a pixel vector of a pixel point corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range;
generating a target matrix according to the pixel vector of each pixel point in the difference graph;
and carrying out sparsification processing on the target matrix, and adopting the matrix subjected to the sparsification processing to embody related information of each pixel point in the difference map and other pixel points in the neighborhood range of the pixel point.
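The construction of the pixel vectors and the target matrix described above can be sketched as follows. This is a hedged illustration: the neighborhood radius `k`, the reflect padding at the image border, and the function name `build_pixel_vectors` are assumptions not fixed by the text.

```python
import numpy as np

def build_pixel_vectors(diff, k=1):
    """One vector per pixel from its (2k+1) x (2k+1) neighbourhood.

    `diff` is the single-channel difference map. The neighbourhood
    radius `k` and the reflect padding used at the image border are
    assumptions; the text fixes neither.
    """
    padded = np.pad(diff, k, mode="reflect")
    h, w = diff.shape
    n = 2 * k + 1
    vectors = np.empty((h * w, n * n), dtype=float)
    idx = 0
    for i in range(h):
        for j in range(w):
            # flatten the neighbourhood window into one pixel vector
            vectors[idx] = padded[i:i + n, j:j + n].ravel()
            idx += 1
    return vectors  # the target matrix: one row per pixel point
```

Stacking one row per pixel point yields the target matrix on which the sparsification below operates.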
Optionally, the thinning processing on the target matrix includes:
selecting target pixel points for change detection from the difference map according to a preset rule;
generating a source matrix according to the pixel vectors corresponding to each target pixel point;
calculating a transformation matrix of a covariance matrix of the source matrix;
and carrying out transformation processing on the target matrix according to the transformation matrix.
Optionally, after the transforming the target matrix according to the transformation matrix, the method further includes:
performing sparsification treatment on the matrix after the transformation treatment;
the sparse processing of the matrix after the transformation processing comprises the following steps:
searching pixel information which exists in the matrix after the transformation and is smaller than a preset threshold value;
the found pixel information is set to 0.
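A minimal sketch of this thresholding step. Comparing absolute values is an assumption, since the transformed matrix may contain negative entries that a plain "smaller than a preset threshold" comparison would zero out wholesale.

```python
import numpy as np

def sparsify(mat, threshold):
    """Set entries whose magnitude is below `threshold` to 0.

    Taking the absolute value before comparing is an assumption not
    stated in the text.
    """
    out = mat.copy()
    out[np.abs(out) < threshold] = 0
    return out
```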
Optionally, the selecting, according to a preset rule, a target pixel point for detecting a change from the difference map includes:
selecting one target pixel point in the same pixel row of the difference map every fixed number of pixel columns, and selecting one target pixel point in the same pixel column every fixed number of pixel rows; or
dividing the difference map into a plurality of pixel units according to a preset dividing standard, and selecting the pixel points located at the same distribution position in each pixel unit as target pixel points.
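The first selection scheme, one target pixel every fixed number of pixel rows and columns, might look like the sketch below; the step size of 4 is an assumed value, since the text only requires the spacing to be equal.

```python
import numpy as np

def select_targets_strided(h, w, step=4):
    """Target pixel coordinates on a regular grid: one pixel every
    `step` pixel rows and every `step` pixel columns.

    The step size is an assumed value.
    """
    rows = np.arange(0, h, step)
    cols = np.arange(0, w, step)
    # all (row, col) combinations of the sampled rows and columns
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    return list(zip(rr.ravel().tolist(), cc.ravel().tolist()))
```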
Optionally, the calculating the transformation matrix of the covariance matrix of the source matrix includes:
zero-equalizing each row in the source matrix;
calculating a covariance matrix of the matrix after zero-mean processing, and calculating eigenvalues and eigenvectors of the covariance matrix;
And sequentially arranging the eigenvectors from top to bottom according to the order of the eigenvalues to obtain the transformation matrix.
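The three steps above are the standard principal component analysis recipe. The sketch below assumes the source matrix stores one feature per row and one target-pixel vector per column; neither the layout nor the function name is fixed by the text.

```python
import numpy as np

def pca_transform_matrix(source):
    """Transformation matrix of the covariance matrix of `source`.

    Steps, as in the text: zero-mean each row, compute the covariance
    matrix, then stack its eigenvectors as rows ordered from largest to
    smallest eigenvalue. One feature per row and one target-pixel
    vector per column of `source` is an assumed layout.
    """
    centred = source - source.mean(axis=1, keepdims=True)
    cov = centred @ centred.T / centred.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric matrix: real eigenpairs, ascending
    order = np.argsort(eigvals)[::-1]        # largest eigenvalue first
    return eigvecs[:, order].T               # one eigenvector per row
```

Multiplying the target matrix by this transformation matrix projects every pixel vector onto the principal components of the sampled source vectors.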
Optionally, the searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range, and performing optimization processing on the target change area includes:
classifying the pixel vectors of each pixel point in the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range;
searching target classification categories with the least pixel vectors in the classification categories;
taking pixel points corresponding to each pixel vector in the target classification category as a target change area;
and carrying out image optimization processing on the target change area.
Optionally, the classifying the pixel vector of each pixel point in the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range includes:
configuring the number of classification categories;
calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map;
if the calculated minimum distance value is larger than the maximum distance between any two already-classified points, the pixel vector of the current pixel point is placed into a new category, and the two closest existing categories are merged so that the number of categories remains unchanged;
otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector having the smallest distance from the current pixel point belongs.
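A hedged sketch of this classification rule. The Euclidean distance, the seeding of the first categories with the first vectors, and merging the two categories whose centroids are closest are all assumptions where the text leaves details open.

```python
import numpy as np

def classify(vectors, n_categories):
    """Group pixel vectors by the distance rule described in the text.

    The first `n_categories` vectors seed one category each (an assumed
    initialisation). Each later vector joins the category of its nearest
    classified vector, unless its smallest distance exceeds the largest
    distance between any two classified vectors; then it opens a new
    category and the two categories with the closest centroids are
    merged, keeping the category count constant. Euclidean distance and
    the centroid-based merge criterion are assumptions.
    """
    vectors = np.asarray(vectors, dtype=float)
    n = len(vectors)
    labels = [-1] * n
    n_seed = min(n_categories, n)
    for c in range(n_seed):
        labels[c] = c
    next_label = n_seed
    for idx in range(n_seed, n):
        v = vectors[idx]
        prev = vectors[:idx]
        dists = np.linalg.norm(prev - v, axis=1)
        # largest distance between any two already-classified vectors
        gaps = np.linalg.norm(prev[:, None, :] - prev[None, :, :], axis=2)
        if dists.min() > gaps.max():
            labels[idx] = next_label         # open a new category
            next_label += 1
            cats = sorted(set(labels[:idx]))
            if len(cats) >= 2:               # merge the two closest categories
                cent = {c: prev[[i for i in range(idx) if labels[i] == c]].mean(axis=0)
                        for c in cats}
                keep, drop = min(((a, b) for a in cats for b in cats if a < b),
                                 key=lambda p: np.linalg.norm(cent[p[0]] - cent[p[1]]))
                for i in range(idx):
                    if labels[i] == drop:
                        labels[i] = keep
        else:
            labels[idx] = labels[int(np.argmin(dists))]
    return labels
```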
Optionally, the performing image optimization processing on the target change area includes:
scanning pixel points in the target change area sequentially by adopting a scanning unit with a preset size;
performing convolution operation on pixel points in a scanning unit by using a first preset matrix, and removing the pixel point at the center position of the scanning unit currently if the convolution result is smaller than or equal to a preset first threshold value;
and carrying out convolution operation on pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold value, resetting pixel information of all the pixel points in the scanning unit according to the pixel points currently positioned at the central position of the scanning unit.
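The two convolution passes can be illustrated on a binary change mask as follows. The 3x3 all-ones kernels (so the convolution result is simply the window sum) and the thresholds t1 and t2 are assumed values, since the text does not specify the preset matrices or thresholds.

```python
import numpy as np

def optimise_mask(mask, t1=1, t2=4):
    """Two-pass clean-up of a binary change mask (1 = changed pixel).

    Pass 1 slides an assumed 3x3 all-ones kernel over the mask; where
    the convolution result is <= t1 the centre pixel is removed, which
    erases isolated noise pixels.
    Pass 2 slides the kernel again; where the result is < t2, every
    pixel in the scanning unit is reset to the value of the unit's
    centre pixel, smoothing ragged region boundaries.
    """
    h, w = mask.shape
    out = mask.copy()
    padded = np.pad(mask, 1)
    for i in range(h):
        for j in range(w):
            if padded[i:i + 3, j:j + 3].sum() <= t1:
                out[i, j] = 0          # remove the centre pixel
    result = out.copy()
    padded = np.pad(out, 1)
    for i in range(h):
        for j in range(w):
            if padded[i:i + 3, j:j + 3].sum() < t2:
                # reset the whole scanning unit from its centre pixel
                result[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = out[i, j]
    return result
```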
In another aspect of the present invention, there is provided a remote sensing image contrast change detection apparatus, the apparatus comprising:
the extraction module is used for extracting difference graphs of two remote sensing images of different time phases in the same region to be compared;
the calculating module is used for calculating the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range;
and the processing module is used for searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range, and optimizing the target change area.
Optionally, the extracting module is specifically configured to extract characteristic values of R, G, B three channels of the two remote sensing images respectively; respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the extracting module is specifically configured to extract characteristic values of R, G, B three channels of the two remote sensing images respectively; respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image; calculating the histogram distances of the three histograms corresponding to each remote sensing image; according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the extracting module is specifically configured to obtain a target color feature of the object to be detected; extracting characteristic values of R, G, B three channels of two remote sensing images respectively; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
Optionally, the computing module includes:
the configuration unit is used for dividing a neighborhood range for each pixel point in the difference graph;
the first generation unit is used for generating a pixel vector of the pixel point corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range;
The second generation unit is used for generating a target matrix according to the pixel vector of each pixel point in the difference graph;
and the computing unit is used for carrying out sparsification on the target matrix, and the matrix after the sparsification is adopted to embody the related information of each pixel point in the difference map and other pixel points in the neighborhood range of the pixel point.
Optionally, the calculating unit is specifically configured to select, according to a preset rule, a target pixel point for change detection from the difference map; generating a source matrix according to the pixel vectors corresponding to each target pixel point; calculating a transformation matrix of a covariance matrix of the source matrix; and carrying out transformation processing on the target matrix according to the transformation matrix.
Optionally, the computing unit is further configured to, after performing transformation processing on the target matrix according to the transformation matrix, find pixel information that exists in the transformed matrix and is smaller than a preset threshold; the found pixel information is set to 0.
Optionally, the calculating unit is specifically configured to select, in the same pixel row of the difference map, one target pixel point every time separated by the same number of pixel columns, and select, in the same pixel column, one target pixel point every time separated by the same number of pixel rows; or dividing the disparity map into a plurality of pixel units according to a preset dividing standard, and selecting the pixel points positioned at the same distribution position in each pixel unit as target pixel points.
Optionally, the computing unit is specifically configured to perform zero-mean processing on each row in the source matrix; calculating a covariance matrix of the matrix after zero-mean processing, and calculating eigenvalues and eigenvectors of the covariance matrix; and sequentially arranging the eigenvectors from top to bottom according to the order of the eigenvalues to obtain the transformation matrix.
Optionally, the processing module includes:
the classification unit is used for classifying the pixel vectors of each pixel point in the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range;
the searching unit is used for searching the target classification category with the least pixel vector in each classification category;
and the optimization processing unit is used for taking the pixel points corresponding to each pixel vector in the target classification category as a target change area and carrying out image optimization processing on the target change area.
Optionally, the classifying unit is specifically configured to configure the number of classification categories; calculate the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map; if the calculated minimum distance value is larger than the maximum distance between any two already-classified points, place the pixel vector of the current pixel point into a new category and merge the two closest existing categories; otherwise, classify the pixel vector of the current pixel point into the classification category to which the pixel vector having the smallest distance from the current pixel point belongs.
Optionally, the optimization processing unit is specifically configured to sequentially scan the pixel points in the target change area by using a scanning unit with a preset size; performing convolution operation on pixel points in a scanning unit by using a first preset matrix, and removing the pixel point at the center position of the scanning unit currently if the convolution result is smaller than or equal to a preset first threshold value; and carrying out convolution operation on pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold value, resetting pixel information of all the pixel points in the scanning unit according to the pixel points currently positioned at the central position of the scanning unit.
Furthermore, the present invention provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, implements the steps of the method as described above.
The invention further provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the steps of the method as described above when executing the program.
According to the remote sensing image contrast change detection method and device, the storage medium and the electronic equipment provided by the invention, the target change area can be searched from the difference map according to the related information of each pixel point in the difference map and other pixel points in its specified neighborhood range, so that the change area can be detected efficiently from two remote sensing images, and the generalization capability of the change detection method is improved.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the description, and in order that the above and other objects, features and advantages of the present invention may become more apparent, preferred embodiments are described in detail below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a remote sensing image contrast change detection method according to an embodiment of the invention;
fig. 2 is a schematic diagram of an implementation flow of step S11 in a remote sensing image contrast change detection method according to an embodiment of the present invention;
fig. 3 is a schematic implementation flow chart of step S12 in the remote sensing image contrast change detection method according to the embodiment of the present invention;
fig. 4 is a schematic implementation flow chart of step S13 in the remote sensing image contrast change detection method according to the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a remote sensing image contrast change detecting device according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating an internal structure of a calculation module in a remote sensing image contrast change detection device according to an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating an internal structure of a processing module in a remote sensing image contrast change detecting device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 schematically illustrates a flowchart of a remote sensing image contrast change detection method according to an embodiment of the present invention. Referring to fig. 1, the remote sensing image contrast change detection method provided by the embodiment of the invention specifically includes steps S11 to S13:
s11, extracting difference images of two remote sensing images of different time phases in the same region to be compared.
S12, calculating related information of each pixel point in the difference map and other pixel points in the specified neighborhood range.
S13, searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the specified neighborhood range, and optimizing the target change area.
The remote sensing image contrast change detection method provided by the embodiment of the invention is suitable for detecting changes of complex objects in densely built-up areas of urban remote sensing images. It can automatically search the difference map for the target change area according to the related information of each pixel point and the other pixel points in its specified neighborhood range, which reduces manpower consumption, detects the change area from the two remote sensing images accurately and efficiently, and improves the generalization capability of the change detection method.
In the embodiment of the present invention, two remote sensing images of the same location taken at different times are obtained first. In this embodiment the pictures have size 1200x1200x3, and the shooting conditions of the two pictures are allowed to differ, that is, there may be differences in color cast, illumination angle and building inclination angle. The difference map is a single-channel picture obtained from the two three-channel pictures. It can be obtained by several methods, as shown in fig. 2, including direct difference calculation, adjusting the weights of the picture channels, and combining the differences between the pixel channels; a method can be selected according to the type of change to be detected:
The first method: direct difference calculation. The least computationally intensive scheme is to average the three channel values of each picture. This minimizes the amount of calculation and detects all change positions in the image indiscriminately; however, due to illumination, cloud cover and other factors in the actual images, numerical differences between the two pictures at some unchanged positions can introduce considerable noise into the difference map.
The specific implementation method comprises the following steps: extracting characteristic values of R, G, B three channels of two remote sensing images respectively; respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In this embodiment, the characteristic values of the R, G, B channels of the two remote sensing images are extracted respectively, so as to obtain the values of the different channels of each color picture. For each color picture, the three channels (R, G, B) are separated.
For each pixel point, the gray value is taken as the average of the three channel values, rounded to an integer, according to the following formula:
Sk=αR+βG+γB
α=0.33,β=0.33,γ=0.33
The difference of the two gray-scale pictures is then calculated by directly subtracting the pixel points at corresponding positions and taking the absolute value of the result, which gives the required difference picture. Specifically, given S1 and S2 of the two pictures, the difference map is the absolute value of the difference between S1 and S2:
S=|S1-S2|
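Method one can be sketched as follows; this is a minimal illustration, and the function name and float conversion are choices of this sketch, not from the patent:

```python
import numpy as np

def difference_map_direct(img1, img2):
    # Method one: average the three channels of each picture into a
    # gray-scale image (S = (R+G+B)/3, i.e. equal weights of ~0.33),
    # then subtract pixel-wise and take the absolute value.
    s1 = img1.astype(np.float64).mean(axis=2)   # S1
    s2 = img2.astype(np.float64).mean(axis=2)   # S2
    return np.abs(s1 - s2)                      # S = |S1 - S2|
```

For two same-sized H×W×3 arrays this returns the single-channel H×W difference map described above.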
Method two: adjusting the picture channel weights. The channel weight of an image refers to the proportion each of the three channel values contributes to the final gray-scale result. Two different pictures usually have different illumination intensities and color casts whose influence covers the whole picture. To solve this problem, the adopted method is to calculate the distance between the corresponding histograms of the color channels and use it to determine the new channel weights, which to a great extent reduces the interference caused by color difference and illumination.
The specific implementation method comprises the following steps: extracting characteristic values of R, G, B three channels of two remote sensing images respectively; respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image; calculating the histogram distances of the three histograms corresponding to each remote sensing image; according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In this embodiment, the pixel values of each channel of each picture are counted. Since the value range of a pixel is an integer from 0 to 255, the number of pixel points taking each value is counted, so each histogram has 256 bins, and three histograms, one per channel, are computed for each picture.
Specifically, the distance between the three corresponding histograms can be calculated according to the following formula, wherein the calculation formula of the histogram distance is as follows:
(The per-bin normalization of each histogram and the distance between two normalized histograms appear as equation images in the original document.) Here fj denotes the count of the j-th bin of a histogram.
In this embodiment, the channel with the smallest histogram distance is G, whose weight is set to 0.2, and the channel with the largest distance is R, whose weight is set to 0.5; this reduces the interference caused by chromatic aberration and illumination.
S1 and S2 of the two pictures are then obtained with these weights, and the difference map is calculated as the absolute value of the difference between S1 and S2.
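Method two can be sketched as follows. Several details are assumptions of this sketch: the patent's exact histogram-distance formula is shown only as an image, so a normalized L1 distance stands in for it; the embodiment gives only the smallest (0.2) and largest (0.5) weights, so the middle channel is assumed to take 0.3 so the weights sum to 1; channel order R, G, B is also assumed:

```python
import numpy as np

def channel_weights_from_histograms(img1, img2):
    # Method two (sketch): compare the 256-bin histogram of each R, G, B
    # channel across the two images; the channel whose histograms differ
    # most gets the largest gray-scale weight (0.5), the least-differing
    # channel the smallest (0.2).  L1 distance and the 0.3 middle weight
    # are assumptions, not from the patent.
    dists = []
    for c in range(3):
        h1, _ = np.histogram(img1[..., c], bins=256, range=(0, 256))
        h2, _ = np.histogram(img2[..., c], bins=256, range=(0, 256))
        h1 = h1 / h1.sum()                 # normalize each histogram
        h2 = h2 / h2.sum()
        dists.append(np.abs(h1 - h2).sum())
    order = np.argsort(dists)              # smallest ... largest distance
    weights = np.empty(3)
    weights[order] = [0.2, 0.3, 0.5]
    return weights
```

The returned weights can then be used in place of the equal α, β, γ of method one when forming S1 and S2.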
Method three: calculating differences between pixel channels. The proportions of the differences between corresponding channels can be adjusted to suit the particular targets whose changes need to be detected. When the object to be detected has a distinct color characteristic, the weights are adjusted according to that characteristic.
The specific implementation method comprises the following steps: acquiring target color characteristics of an object to be detected; extracting characteristic values of R, G, B three channels of two remote sensing images respectively; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In this embodiment, the changes to be detected are house changes. In the images, a changed house in an open field usually has a red or blue roof, while changes in green vegetation and the ground surface are not of interest.
The pairwise differences between the three channels of each picture are calculated and values smaller than 0 are set to 0, yielding a total of six difference values among the R, G, B channels.
Weights for the channel differences are determined. What needs attention here is building change, while vegetation and water areas are not of concern. Vegetation and water are distinctly green, so pixel points where the green channel (G) greatly exceeds the blue channel (B) and red channel (R) need no attention and their weight is reduced, while the weight of the differences between the R channel and the other channels is increased; optionally, α is set to 0.4, β to 0.4, and γ to 0.2.
The difference map is generated. According to the weights determined above, the new gray-scale map of each image, which emphasizes the red component, is calculated according to the following formula:
S=α(R-B)+β(R-G)+γR
α+β+γ=1
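Method three's gray-scale formula can be sketched as follows; the channel order R, G, B and the function name are assumptions of this sketch:

```python
import numpy as np

def gray_for_red_targets(img, alpha=0.4, beta=0.4, gamma=0.2):
    # Method three: S = alpha*(R-B) + beta*(R-G) + gamma*R, with
    # negative channel differences clipped to 0 as described above,
    # so green-dominated pixels (vegetation, water) contribute little.
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (alpha * np.clip(r - b, 0, None)
            + beta * np.clip(r - g, 0, None)
            + gamma * r)
```

A strongly red pixel scores high while a purely green pixel scores 0, matching the red-roof emphasis described above.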
In the embodiment of the invention, the preliminary acquisition of the difference map of the remote sensing images is realized in step S11. The preliminarily obtained difference map contains almost all tiny change areas and cannot be used to directly generate the final change image, so some pixel points need to be sampled, and each pixel point must be processed in combination with its neighborhood information. As shown in fig. 3, step S12 calculates the related information between each pixel point in the disparity map and the other pixels in its neighborhood range, and specifically includes the following steps:
S121, dividing a neighborhood range for each pixel point in the difference map.
S122, generating a pixel vector of the pixel corresponding to the current neighborhood range according to the pixel information of each pixel in each neighborhood range.
S123, generating a target matrix according to the pixel vector of each pixel point in the difference graph.
S124, performing sparsification processing on the target matrix, and adopting the matrix after the sparsification processing to embody related information of each pixel point in the difference map and other pixel points in the neighborhood range of the pixel point.
The sparsification processing of the target matrix specifically includes the following steps (not shown in the figures):
S1241, target pixel points for change detection are selected from the difference map according to a preset rule. Specifically, in order to avoid results specific to a particular image, only a part of the points in the image are selected for calculation. There are many selection methods; the main principles to guarantee are that the selected points be uniformly distributed, non-repeating, and moderate in number. The invention adopts a uniformly and randomly distributed sampling mode to select the target pixel points; the specific selection modes include but are not limited to the following two: selecting one target pixel point every fixed number of pixel columns within the same pixel row, and one every fixed number of pixel rows within the same pixel column; or dividing the disparity map into a plurality of pixel units according to a preset dividing standard, and selecting the pixel point at the same distribution position in each pixel unit as a target pixel point.
S1242, a source matrix is generated according to the pixel vectors corresponding to the target pixel points.
S1243, a transformation matrix of the covariance matrix of the source matrix is calculated. In this embodiment, in order to realize a linear transformation of the target matrix, the invention transforms the target matrix by calculating a transformation matrix corresponding to the source matrix. The method for calculating the transformation matrix includes: zero-averaging each row in the source matrix; calculating the covariance matrix of the zero-averaged matrix, and calculating the eigenvalues and eigenvectors of the covariance matrix; and arranging the eigenvectors from top to bottom in order of their eigenvalues to obtain the transformation matrix.
S1244, performing transformation processing on the target matrix according to the transformation matrix.
Further, after the target matrix is transformed according to the transformation matrix, the method further includes performing sparsification processing on the transformed matrix. The specific implementation is: searching the transformed matrix for pixel information smaller than a preset threshold value, and setting the found pixel information to 0.
The following explains a specific implementation method for calculating the related information of each pixel point in the disparity map and other pixels in the neighborhood range of the pixel point by using a specific embodiment.
In this embodiment, a single pixel point can only reflect the degree of change at its own position, but the changes of interest are necessarily continuous in the image, so the information of the points surrounding each pixel point must be taken into consideration. In this embodiment, the pixels in the adjacent range of each pixel point are added: a region of size 5×5 is selected and flattened, so the information of a pixel point changes from a single value to a 1×25 vector, which represents the contribution of the pixel point and its surrounding points.
In order to avoid results specific to a particular image, only a part of the points in the image are selected for calculation. There are many selection methods; the main principles to guarantee are that the selected points be uniformly distributed, non-repeating, and moderate in number.
And determining a selection scheme. In this embodiment, a sampling manner of uniform random distribution is adopted, the selected neighborhood range is a region 5*5, the size of the picture is 1200 x 1200, under the condition that no repetition exists, a picture has 240 x 240 regions 5*5, 240 x 240 pixel points are obtained, the distance between each pixel point and the adjacent pixel points on the upper, lower, left and right sides of the pixel point is 5, the distribution of the pixel points covers the whole picture, the distribution is uniform, and the principles of uniform distribution, non-repetition and moderate quantity of the selected points are satisfied.
The vectors formed by the selected pixel points and their neighborhoods form a vector set, which can be regarded as a matrix in which each column represents the vector of one pixel point. The operation step is to compose the 25-dimensional vectors of the 240×240 pixel points into a matrix X with 25 rows and 240×240 columns.
The dimension of the pixel point vector is reduced. The vectors are usually linearly related, and in order to reduce the dimension of the vectors, the eigenvectors of the matrix are found. The specific operation steps are as follows:
Zero-averaging. The 25-dimensional vectors of the 240×240 pixel points are formed into the matrix X of 25 rows and 240×240 columns, and each row of X is zero-averaged, i.e., the mean of the row is subtracted from it.
A covariance matrix is calculated. For the matrix X that has been previously averaged, the covariance matrix C is calculated as follows:
C = (1/m)·X·X^T, where m = 240×240 is the number of columns of X
A transformation matrix is obtained. The eigenvalues and eigenvectors of C are computed, and the eigenvectors are arranged as rows from top to bottom in descending order of their eigenvalues, giving the required transformation matrix K, whose dimension is 25×25.
The transform is applied to all pixels. The matrix K obtained in the previous step was derived from part of the points of the picture; to reduce the data dimension, this matrix K is taken to define a linear transformation for all the points.
Including all pixel points. For each pixel point of the disparity map, taking the neighborhood range of 5*5 to obtain a vector, namely 1200 x 1200 vectors in total, and for the edge point, supplementing the non-existing neighborhood range to 0.
A new matrix is formed. Since 25 is the dimension of each vector and 1200×1200 is the total number of pixels in the whole image, all the vectors are combined into a new matrix X_n with 25 rows and 1200×1200 columns.
The transformation is applied. The transform K is applied to X_n according to the following formula:
X_n=K·X_n
The result is a matrix of 25 rows and 1200×1200 columns; this matrix should be sparser than the original X_n, or at least have more values close to 0.
Sparsification. To achieve sparseness, a threshold of 5 is set and the values in the result smaller than 5 are set to 0, thereby introducing a nonlinear step on top of the linear transformation.
Through the steps, a difference graph consisting of 1200 x 1200 pixels is changed into a 25-row 1200 x 1200-column sparse matrix.
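The pipeline of steps S121–S124 and the worked example above can be sketched as follows on a smaller image. Several details are assumptions of this sketch: the sampling stride, taking the absolute value when thresholding, and all function and variable names:

```python
import numpy as np

def sparse_neighborhood_features(diff, patch=5, stride=5, thresh=5.0):
    # Sketch of S121-S124: one 25-dim neighborhood vector per pixel,
    # a PCA-style transform K learned from uniformly sampled points,
    # applied to every pixel vector, then thresholded to sparsify.
    h, w = diff.shape
    pad = patch // 2
    padded = np.pad(diff.astype(np.float64), pad)   # edge neighborhoods padded with 0
    cols = np.empty((patch * patch, h * w))         # one column per pixel
    idx = 0
    for i in range(h):
        for j in range(w):
            cols[:, idx] = padded[i:i + patch, j:j + patch].ravel()
            idx += 1
    # uniformly sampled, non-overlapping neighborhoods -> source matrix
    sample_idx = [i * w + j for i in range(pad, h, stride)
                            for j in range(pad, w, stride)]
    X = cols[:, sample_idx]
    X = X - X.mean(axis=1, keepdims=True)           # zero-mean each row
    C = X @ X.T / X.shape[1]                        # covariance matrix
    eigval, eigvec = np.linalg.eigh(C)              # ascending eigenvalues
    K = eigvec[:, ::-1].T                           # rows: descending eigenvalue order
    Y = K @ cols                                    # transform all pixel vectors
    Y[np.abs(Y) < thresh] = 0                       # sparsification step
    return Y
```

For a 1200×1200 difference map this yields the 25-row, 1200×1200-column sparse matrix described above; the test below uses 20×20 for speed.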
In the embodiment of the invention, many pixels in the difference picture carry only small changes that are not of concern, while the changes of interest are large in extent and stable, so the detected pixels can be regarded as two main compositions: true change points, which have aggregation and continuity, and discrete non-change points; the change region must be separated from the remaining pixel points. As shown in fig. 4, step S13 searches the target change area from the difference map according to the related information of each pixel point in the difference map and the other pixel points in the specified neighborhood range, and performs optimization processing on the target change area, specifically including the following steps:
S131, classifying the pixel vectors of each pixel point in the difference map according to the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range.
The specific implementation flow is as follows: configuring the number of classification categories; calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map; if the calculated minimum distance value is larger than the maximum distance between any two category points, the pixel vector of the current pixel point is divided into a new category, and the two corresponding categories are combined; otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector having the smallest distance from the current pixel point belongs.
S132, searching the target classification category with the minimum pixel vector in each classification category.
S133, taking the pixel points corresponding to the pixel vectors in the target classification category as target change areas.
In this embodiment, since many pixels in the difference picture carry only small changes that are not of concern, and the changes of interest are usually large and stable, the detected pixels can be regarded as two main compositions: true change points, which have aggregation and continuity, and discrete non-change points.
The vectors are classified. Through the previous steps, the relevant information of each pixel point and the surrounding area thereof is obtained, and all the pixel points need to be classified according to the information. In theory, all mature and reliable classification methods can be applied in this process. Since this classification problem has no specific label, i.e. it cannot be known which pixel point actually belongs to which classification, only an unsupervised classification method can be applied to the process, and the specific steps are as follows:
The number of categories is determined. The number of classes to be divided must be determined in advance. Since the elements are unlabeled and the number of elements in each class is unknown, the number of classes should not exceed 5; here it is set to 3.
And carrying out cluster classification. Starting from the vector of the first point, adding a new vector each time to calculate, and calculating the distance between the newly added vector and other vectors, wherein the distance can be L1 distance, L2 distance, histogram distance or the like. In this embodiment, the chi-square distance is selected, and for vector values, the chi-square distance can be written as:
d(a, b) = Σ (a_i − b_i)² / (a_i + b_i), summed over i = 1, …, N
where a and b are two vectors and N is the vector dimension, here 25. If the smallest distance value is greater than the largest distance between any two points of one class, the new vector is divided into a new class and the two corresponding classes are merged; otherwise, the newly added vector is assigned to the class of the vector closest to it.
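The chi-square distance used in the clustering step can be sketched as follows; the small `eps` guard against zero denominators is an implementation detail of this sketch, not from the patent:

```python
import numpy as np

def chi_square_distance(a, b, eps=1e-12):
    # Chi-square distance between two pixel vectors:
    #   d(a, b) = sum_i (a_i - b_i)^2 / (a_i + b_i)
    # eps avoids division by zero when both entries are 0 (assumption).
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.sum((a - b) ** 2 / (a + b + eps))
```

During clustering, this distance is computed between each newly added pixel vector and the vectors already assigned to classes.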
The category of the result classification is selected. Choosing the correct category as the final result largely determines the quality of the result, and empirical results indicate that the change points that really need attention lie in the category with the fewest members after classification. Therefore, in this embodiment, the selection is implemented by a preset program: the classification set with the fewest vectors among the 3 major classes is selected, and the pixel points corresponding to all the vectors in that set are considered the detected change points.
S134, performing image optimization processing on the target change area.
The resulting target change area may still contain some noise: small, scattered, irregular spots on the image caused by large-scale land-structure changes. These are indeed part of the change but need no attention in this application, so the target change area must be image-optimized to remove them.
The specific implementation flow is as follows: scanning pixel points in the target change area sequentially by adopting a scanning unit with a preset size; performing convolution operation on pixel points in a scanning unit by using a first preset matrix, and removing the pixel point at the center position of the scanning unit currently if the convolution result is smaller than or equal to a preset first threshold value; and carrying out convolution operation on pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold value, resetting pixel information of all the pixel points in the scanning unit according to the pixel points currently positioned at the central position of the scanning unit.
The following explanation of the optimization of the resulting image is implemented by a specific embodiment.
First, the whole picture is scanned by convolution operation to determine whether each pixel should be preserved to remove small noise points.
A convolution kernel is determined. A small square matrix with entries 0 or 1 is defined as the first preset matrix.
The kernel is as follows (its 0/1 entries appear as an image in the original document).
The convolution operation removes noise points. The convolution is performed over every 5×5 region of the obtained image with the threshold set to 9; only if the convolution result is larger than 9 is the center point of the corresponding square region considered worth keeping, otherwise the point is removed.
Second, the entire picture is scanned using a convolution operation to determine if the space around each pixel should be filled to enlarge the region of interest.
A convolution kernel is determined. A small square matrix with entries 0 or 1 is defined as the second preset matrix.
The kernel is as follows (its 0/1 entries appear as an image in the original document).
The convolution operation fills regions. The convolution is performed over every 5×5 region of the obtained image with the threshold set to 5; only if the convolution result is smaller than 5 is the center point of the corresponding square region considered to need filling, and all points of that rectangle are filled to enlarge the area of interest.
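The noise-removal pass of step S134 can be sketched as follows. Because the patent's exact 0/1 kernels are shown only as figures, an all-ones 5×5 kernel is assumed here, so the convolution result is simply the count of detected pixels in each window; the fill pass would be analogous, using the second kernel and the "result smaller than 5" test:

```python
import numpy as np

def window_counts(mask, k=5):
    # Convolution of a binary mask with an assumed all-ones k x k
    # kernel: the count of detected (1-valued) pixels per window.
    h, w = mask.shape
    pad = k // 2
    padded = np.pad(mask.astype(np.int32), pad)
    out = np.zeros((h, w), dtype=np.int32)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].sum()
    return out

def remove_noise_points(mask, k=5, thresh=9):
    # First pass of S134: keep a detected center point only if the
    # convolution result over its window exceeds the threshold
    # (9 in the embodiment); isolated points are removed.
    return mask * (window_counts(mask, k) > thresh)
```

An isolated detected pixel has a window count far below 9 and is dropped, while pixels inside a solid change region are kept.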
Finally, the region is vectorized. After the removal and enlargement steps, each region in the result is still likely to be irregular, for ease of viewing in actual use, the boundary points of a continuous region are found, and the regions are reassembled into a relatively regular region based on the boundary points.
The embodiment of the invention provides a method for selecting a better difference map according to the color difference, the illumination condition, the shooting angle and the object type required to detect the change of the obtained original image, and realizes the selection of parameters which are as proper as possible according to the picture condition.
The embodiment of the invention provides a process for classifying pixel points by combining each pixel point with its neighborhood information, and realizes a transformation process for extracting the degree of correlation between a pixel point and its neighborhood.
The embodiment of the invention realizes the process of classifying the high-dimensional data under the condition of no label according to the high-dimensional information of the pixel points.
The embodiment of the invention provides a follow-up flow of optimizing processing for the result image with noise, and realizes noise elimination and useful region amplification by using the removing and filling operation, so that the practicability is higher.
For the purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, but it is to be understood and appreciated by one of ordinary skill in the art that the methodologies are not limited by the order of acts, as some acts may, in accordance with the embodiments of the present invention, occur in other orders or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that not all of the acts described are necessarily required to practice the invention.
Fig. 5 schematically illustrates a structural diagram of a remote sensing image contrast change detection device according to an embodiment of the present invention. Referring to fig. 5, the remote sensing image contrast change detection device of the embodiment of the present invention specifically includes an extraction module 201, a calculation module 202, and a processing module 203, where:
the extraction module 201 is configured to extract difference maps of two remote sensing images of different time phases in the same region to be compared;
the calculating module 202 is configured to calculate related information between each pixel point in the disparity map and other pixel points in the specified neighborhood range;
and the processing module 203 is configured to search a target change area from the difference map according to related information of each pixel point in the difference map and other pixel points in a specified neighborhood range, and perform optimization processing on the target change area.
In one embodiment of the present invention, the extracting module 201 is specifically configured to extract the characteristic values of the R, G, B three channels of the two remote sensing images respectively; respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In one embodiment of the present invention, the extracting module 201 is specifically configured to extract the characteristic values of the R, G, B three channels of the two remote sensing images respectively; respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image; calculating the histogram distances of the three histograms corresponding to each remote sensing image; according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In one embodiment of the present invention, the extracting module 201 is specifically configured to obtain a target color feature of an object to be detected; extracting characteristic values of R, G, B three channels of two remote sensing images respectively; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
In the embodiment of the present invention, as shown in fig. 6, the computing module 202 includes a configuration unit 2021, a first generating unit 2022, a second generating unit 2023, and a computing unit 2024, where:
a configuration unit 2021, configured to divide a neighborhood range for each pixel point in the disparity map;
a first generating unit 2022, configured to generate a pixel vector of a pixel corresponding to the current neighborhood range according to pixel information of each pixel in each neighborhood range;
a second generating unit 2023, configured to generate a target matrix according to a pixel vector of each pixel point in the disparity map;
and the computing unit 2024 is configured to perform sparsification processing on the target matrix, and embody relevant information of each pixel point in the difference map and other pixel points in a neighborhood range of the pixel point by using the matrix after the sparsification processing.
The calculating unit 2024 is specifically configured to select, according to a preset rule, a target pixel point for change detection from the difference map; generating a source matrix according to the pixel vectors corresponding to each target pixel point; calculating a transformation matrix of a covariance matrix of the source matrix; and carrying out transformation processing on the target matrix according to the transformation matrix.
Further, the calculating unit 2024 is further configured to, after performing a transformation process on the target matrix according to the transformation matrix, find pixel information that is less than a preset threshold value and exists in the transformed matrix; the found pixel information is set to 0.
Further, the calculating unit 2024 is specifically configured to select, in the same pixel row of the difference map, one target pixel point every time by the same number of pixel columns, and select, in the same pixel column, one target pixel point every time by the same number of pixel rows; or dividing the disparity map into a plurality of pixel units according to a preset dividing standard, and selecting the pixel points positioned at the same distribution position in each pixel unit as target pixel points.
Further, the calculating unit 2024 is specifically configured to perform zero-averaging processing on each row in the source matrix; calculating a covariance matrix of the matrix after zero-mean processing, and calculating eigenvalues and eigenvectors of the covariance matrix; and sequentially arranging the eigenvectors from top to bottom according to the order of the eigenvalues to obtain the transformation matrix.
In the embodiment of the present invention, as shown in fig. 7, the processing module 203 specifically includes a classification unit 2031, a search unit 2032, and an optimization processing unit 2033, where:
a classification unit 2031, configured to classify pixel vectors of each pixel point in the difference map according to related information of each pixel point in the difference map and other pixel points in the specified neighborhood range;
A searching unit 2032, configured to search, among the classification categories, for the target classification category containing the fewest pixel vectors;
the optimization processing unit 2033 is configured to take the pixel points corresponding to the pixel vectors in the target classification category as the target change area, and to perform image optimization processing on the target change area.
In this embodiment, the classifying unit 2031 is specifically configured to configure the number of classification categories; calculate the distance between the pixel vector of the current pixel point and the pixel vectors of the other pixel points in the difference map; if the calculated minimum distance is larger than the maximum distance between any two category points, assign the pixel vector of the current pixel point to a new category and merge the two corresponding categories; otherwise, assign the pixel vector of the current pixel point to the classification category of the pixel vector at the smallest distance from it.
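A simplified sketch of this sequential classification is given below. It keeps only the core rule (join the class of the nearest already-classified vector, or found a new class when even that distance is too large); the category-merging step of the full scheme is omitted, and the threshold and vectors are invented for the example.

```python
import numpy as np

def sequential_classify(vectors, new_class_distance):
    """Each pixel vector joins the class of its nearest
    already-classified vector, unless that nearest distance exceeds
    `new_class_distance`, in which case it founds a new class."""
    labels = [0]                 # the first vector starts class 0
    n_classes = 1
    for i in range(1, len(vectors)):
        dists = [float(np.linalg.norm(vectors[i] - vectors[j]))
                 for j in range(i)]
        nearest = int(np.argmin(dists))
        if dists[nearest] > new_class_distance:
            labels.append(n_classes)
            n_classes += 1
        else:
            labels.append(labels[nearest])
    return labels

vecs = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([5.0, 5.0])]
labels = sequential_classify(vecs, 1.0)   # -> [0, 0, 1]
```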
In this embodiment, the optimization processing unit 2033 is specifically configured to scan the pixel points in the target change area sequentially with a scanning unit of a preset size; perform a convolution operation on the pixel points in the scanning unit with a first preset matrix and, if the convolution result is smaller than or equal to a preset first threshold value, remove the pixel point currently at the center of the scanning unit; and perform a convolution operation on the pixel points in the scanning unit with a second preset matrix and, if the convolution result is smaller than a preset second threshold value, reset the pixel information of all pixel points in the scanning unit according to the pixel point currently at the center of the scanning unit.
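The first pass of this optimization (clearing a center pixel with too few changed neighbours) can be sketched as follows. The 3x3 all-ones kernel, the threshold, and the toy mask are illustrative assumptions; the patent leaves the preset matrices and thresholds unspecified.

```python
import numpy as np

def prune_isolated_pixels(mask, kernel, first_threshold):
    """Slide a 3x3 scanning unit over a binary change mask and clear
    the center pixel whenever the windowed convolution with `kernel`
    is at or below `first_threshold` (too few changed neighbours)."""
    h, w = mask.shape
    out = mask.copy()
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = mask[r - 1:r + 2, c - 1:c + 2]
            if np.sum(window * kernel) <= first_threshold:
                out[r, c] = 0
    return out

kernel = np.ones((3, 3))
mask = np.zeros((5, 5))
mask[2, 2] = 1                                  # one isolated change pixel
cleaned = prune_isolated_pixels(mask, kernel, 1)  # isolated pixel removed
```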
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
The remote sensing image contrast change detection method and device provided by the embodiments of the present invention are suitable for detecting changes of complex objects in densely distributed building areas of urban remote sensing images. They can automatically search the difference map for the target change area according to the related information of each pixel point and the other pixel points in its specified neighborhood range, which reduces manual effort, allows change areas to be detected accurately and efficiently from the two remote sensing images, and improves the generalization capability of the change detection method.
Furthermore, embodiments of the present invention provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the method described above.
In this embodiment, the modules/units integrated in the remote sensing image contrast change detection device may, if implemented in the form of software functional units and sold or used as an independent product, be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the method of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be added to or removed as appropriate in accordance with the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The electronic device provided by the embodiment of the present invention comprises a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the steps of each of the above embodiments of the remote sensing image contrast change detection method, for example steps S11-S13 shown in fig. 1. Alternatively, when executing the computer program, the processor implements the functions of the modules/units in the above embodiments of the remote sensing image contrast change detection device, for example the extracting module 201, the calculating module 202, and the processing module 203 shown in fig. 5.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program in the remote sensing image contrast change detection device. For example, the computer program may be divided into an extraction module 201, a calculation module 202, and a processing module 203.
The electronic device may be a mobile computer, a notebook, a palmtop computer, a mobile phone, or other equipment, and may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the electronic device in this embodiment may include more or fewer components than shown, combine certain components, or use different components; for example, the electronic device may also include input and output devices, network access devices, buses, and so on.
The processor may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and connects the various parts of the entire electronic device using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Those skilled in the art will appreciate that, while some embodiments herein include certain features that other embodiments do not, combinations of features from different embodiments are meant to fall within the scope of the invention and to form further embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (22)

1. A method for detecting contrast change of a remote sensing image, characterized by comprising the following steps:
extracting difference images of two remote sensing images of different time phases in the same region to be compared;
calculating related information of each pixel point in the difference map and other pixel points in a specified neighborhood range; the calculating the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range comprises the following steps: dividing a neighborhood range for each pixel point in the difference map; generating a pixel vector of a pixel point corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range; generating a target matrix according to the pixel vector of each pixel point in the difference graph; performing sparsification processing on the target matrix, and embodying relevant information of each pixel point in the difference map and other pixel points in the neighborhood range by using the sparsified matrix;
Searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range, and carrying out optimization processing on the target change area; searching a target change area from the difference map according to the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range, and carrying out optimization processing on the target change area comprises the following steps: classifying the pixel vectors of each pixel point in the difference map according to the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range; searching target classification categories with the least pixel vectors in the classification categories; taking pixel points corresponding to each pixel vector in the target classification category as a target change area; and carrying out image optimization processing on the target change area.
2. The method according to claim 1, wherein the extracting the difference map of two remote sensing images of different phases in the same region to be compared comprises:
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images;
And respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
3. The method according to claim 1, wherein the extracting the difference map of two remote sensing images of different phases in the same region to be compared comprises:
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image;
calculating the histogram distances of the three histograms corresponding to each remote sensing image;
according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value;
and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
4. The method according to claim 1, wherein the extracting the difference map of two remote sensing images of different phases in the same region to be compared comprises:
Acquiring target color characteristics of an object to be detected;
extracting characteristic values of R, G, B three channels of two remote sensing images respectively;
calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image;
adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value;
and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
5. The method of claim 1, wherein the sparsifying the target matrix comprises:
selecting target pixel points for change detection from the difference map according to a preset rule;
generating a source matrix according to the pixel vectors corresponding to each target pixel point;
calculating a transformation matrix of a covariance matrix of the source matrix;
And carrying out transformation processing on the target matrix according to the transformation matrix.
6. The method of claim 5, wherein after said transforming the target matrix according to the transformation matrix, the method further comprises:
performing sparsification treatment on the matrix after the transformation treatment;
the sparse processing of the matrix after the transformation processing comprises the following steps:
searching pixel information which exists in the matrix after the transformation and is smaller than a preset threshold value;
the found pixel information is set to 0.
7. The method of claim 5, wherein selecting the target pixel for change detection from the disparity map according to a preset rule comprises:
selecting, in the same pixel row of the difference map, one target pixel point every time by the same number of pixel columns, and selecting, in the same pixel column, one target pixel point every time by the same number of pixel rows; or
Dividing the disparity map into a plurality of pixel units according to a preset dividing standard, and selecting pixel points positioned at the same distribution position in each pixel unit as target pixel points.
8. The method of claim 5, wherein said calculating a transformation matrix of a covariance matrix of the source matrix comprises:
Zero-equalizing each row in the source matrix;
calculating a covariance matrix of the matrix after zero-mean processing, and calculating eigenvalues and eigenvectors of the covariance matrix;
and sequentially arranging the eigenvectors from top to bottom according to the order of the eigenvalues to obtain the transformation matrix.
9. The method of claim 1, wherein classifying the pixel vector of each pixel in the disparity map according to the information about each pixel in the disparity map and other pixels in the specified neighborhood comprises:
configuring the number of classification categories;
calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map;
if the calculated minimum distance value is larger than the maximum distance between any two category points, the pixel vector of the current pixel point is divided into a new category, and the two corresponding categories are combined;
otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector having the smallest distance from the current pixel point belongs.
10. The method according to claim 1, wherein the performing image optimization processing on the target change region includes:
Scanning pixel points in the target change area sequentially by adopting a scanning unit with a preset size;
performing convolution operation on pixel points in a scanning unit by using a first preset matrix, and removing the pixel point at the center position of the scanning unit currently if the convolution result is smaller than or equal to a preset first threshold value;
and carrying out convolution operation on pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold value, resetting pixel information of all the pixel points in the scanning unit according to the pixel points currently positioned at the central position of the scanning unit.
11. A remote sensing image contrast change detection device, the device comprising:
the extraction module is used for extracting difference graphs of two remote sensing images of different time phases in the same region to be compared;
the calculating module is used for calculating the related information of each pixel point in the difference map and other pixel points in the appointed neighborhood range; the computing module comprises: the configuration unit is used for dividing a neighborhood range for each pixel point in the difference graph; the first generation unit is used for generating a pixel vector of the pixel point corresponding to the current neighborhood range according to the pixel information of each pixel point in each neighborhood range; the second generation unit is used for generating a target matrix according to the pixel vector of each pixel point in the difference graph; the computing unit is used for carrying out sparsification on the target matrix, and the matrix after the sparsification is adopted to embody the related information of each pixel point in the difference map and other pixel points in the neighborhood range;
The processing module is used for searching a target change area from the difference graph according to the related information of each pixel point in the difference graph and other pixel points in the appointed neighborhood range of the pixel point, and optimizing the target change area; the processing module comprises a classification unit, a classification unit and a classification unit, wherein the classification unit is used for classifying the pixel vector of each pixel point in the difference graph according to the related information of each pixel point in the difference graph and other pixel points in a specified neighborhood range; the searching unit is used for searching the target classification category with the least pixel vector in each classification category; and the optimization processing unit is used for taking the pixel points corresponding to each pixel vector in the target classification category as a target change area and carrying out image optimization processing on the target change area.
12. The device according to claim 11, wherein the extracting module is specifically configured to extract the feature values of the R, G, B three channels of the two remote sensing images respectively; respectively generating gray level pictures corresponding to each remote sensing image according to the average value of the characteristic values of R, G, B channels corresponding to each pixel point in the two remote sensing images; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
13. The device according to claim 11, wherein the extracting module is specifically configured to extract the feature values of the R, G, B three channels of the two remote sensing images respectively; respectively carrying out histogram statistics on the characteristic values of R, G, B channels of each remote sensing image; calculating the histogram distances of the three histograms corresponding to each remote sensing image; according to the size relation of the histogram distances of the three histograms corresponding to each remote sensing image, determining the weight value occupied by the characteristic values of the three channels in R, G, B when the corresponding gray level picture is generated, and generating the gray level picture corresponding to the current remote sensing image according to the determined weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
14. The apparatus according to claim 11, wherein the extraction module is specifically configured to obtain a target color feature of an object to be detected; extracting characteristic values of R, G, B three channels of two remote sensing images respectively; calculating the difference value of the characteristic value between the color channel corresponding to the target color characteristic and other channels of each pixel point in each remote sensing image; adjusting the weight value occupied by the characteristic value of the color channel corresponding to the target color characteristic of each pixel point in each remote sensing image and the difference value of the characteristic value between the color channel and other channels when the gray level image is generated, and generating the gray level image corresponding to the current remote sensing image according to the adjusted weight value; and respectively calculating the difference value of the pixel points positioned at the same position in the gray level pictures corresponding to the two remote sensing images, and generating a difference map of the two remote sensing images according to the difference value of each pixel point in the two remote sensing images.
15. The apparatus according to claim 11, wherein the computing unit is specifically configured to select a target pixel point for change detection from the disparity map according to a preset rule; generating a source matrix according to the pixel vectors corresponding to each target pixel point; calculating a transformation matrix of a covariance matrix of the source matrix; and carrying out transformation processing on the target matrix according to the transformation matrix.
16. The apparatus according to claim 15, wherein the computing unit is further configured to, after performing a transformation process on the target matrix according to the transformation matrix, find pixel information that is smaller than a preset threshold value and that is present in the transformed matrix; the found pixel information is set to 0.
17. The apparatus according to claim 15, wherein the calculating unit is specifically configured to select one target pixel point every time by the same number of pixel columns in the same pixel row of the disparity map, and select one target pixel point every time by the same number of pixel rows in the same pixel column; or dividing the disparity map into a plurality of pixel units according to a preset dividing standard, and selecting the pixel points positioned at the same distribution position in each pixel unit as target pixel points.
18. The apparatus according to claim 15, wherein the computing unit is configured to perform zero-averaging processing on each row in the source matrix; calculating a covariance matrix of the matrix after zero-mean processing, and calculating eigenvalues and eigenvectors of the covariance matrix; and sequentially arranging the eigenvectors from top to bottom according to the order of the eigenvalues to obtain the transformation matrix.
19. The apparatus according to claim 11, wherein the classification unit is specifically configured to configure a category number of classification categories; calculating the distance between the pixel vector of the current pixel point and the pixel vectors of other pixel points in the difference map; if the calculated minimum distance value is larger than the maximum distance between any two category points, the pixel vector of the current pixel point is divided into a new category, and the two corresponding categories are combined; otherwise, the pixel vector of the current pixel point is classified into the classification category to which the pixel vector having the smallest distance from the current pixel point belongs.
20. The apparatus of claim 11, wherein the optimization processing unit is specifically configured to sequentially scan the pixel points in the target change area by using a scanning unit with a preset size; performing convolution operation on pixel points in a scanning unit by using a first preset matrix, and removing the pixel point at the center position of the scanning unit currently if the convolution result is smaller than or equal to a preset first threshold value; and carrying out convolution operation on pixel points in the scanning unit by using a second preset matrix, and if the convolution result is smaller than a preset second threshold value, resetting pixel information of all the pixel points in the scanning unit according to the pixel points currently positioned at the central position of the scanning unit.
21. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-10.
22. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-10 when the program is executed.
CN201911305648.9A 2019-12-18 2019-12-18 Remote sensing image contrast change detection method and device, storage medium and electronic equipment Active CN111179230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911305648.9A CN111179230B (en) 2019-12-18 2019-12-18 Remote sensing image contrast change detection method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111179230A CN111179230A (en) 2020-05-19
CN111179230B true CN111179230B (en) 2023-06-09

Family

ID=70646288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911305648.9A Active CN111179230B (en) 2019-12-18 2019-12-18 Remote sensing image contrast change detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111179230B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288639A (en) * 2020-07-20 2021-01-29 深圳市智影医疗科技有限公司 Image contrast subtraction method and device, terminal equipment and storage medium
CN111950607A (en) * 2020-07-28 2020-11-17 深圳市元征科技股份有限公司 Reminding method, reminding device and server
CN112464803A (en) * 2020-11-26 2021-03-09 泰康保险集团股份有限公司 Image comparison method and device
CN112560740A (en) * 2020-12-23 2021-03-26 中国水利水电科学研究院 PCA-Kmeans-based visible light remote sensing image change detection method
CN112766264B (en) * 2021-01-25 2024-06-07 广州互联网法院 Picture comparison method, electronic device and computer readable storage medium
CN113724205A (en) * 2021-08-09 2021-11-30 浙江大华技术股份有限公司 Image change detection method, apparatus and storage medium
CN113657338A (en) * 2021-08-25 2021-11-16 平安科技(深圳)有限公司 Transmission state identification method and device, computer equipment and storage medium
CN113554011B (en) * 2021-09-22 2022-01-04 航天宏图信息技术股份有限公司 Method, device and equipment for detecting change of remote sensing image and storage medium
CN114049568A (en) * 2021-11-29 2022-02-15 中国平安财产保险股份有限公司 Object shape change detection method, device, equipment and medium based on image comparison
CN115164845B (en) * 2022-07-04 2024-04-05 新疆维吾尔自治区环境保护科学研究院 Ecological restoration remote sensing automatic detection device and method
CN116258968B (en) * 2023-05-10 2023-07-18 蒙阴县国有林场总场(蒙阴县国有岱崮林场) Method and system for managing fruit diseases and insects
CN116758067B (en) * 2023-08-16 2023-12-01 梁山县成浩型钢有限公司 Metal structural member detection method based on feature matching

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473755A (en) * 2013-09-07 2013-12-25 西安电子科技大学 SAR image sparsing denoising method based on change detection
CN104091151A (en) * 2014-06-30 2014-10-08 南京信息工程大学 Vehicle identification method based on Gabor feature extraction and sparse representation

Also Published As

Publication number Publication date
CN111179230A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111179230B (en) Remote sensing image contrast change detection method and device, storage medium and electronic equipment
CN111192239B (en) Remote sensing image change area detection method and device, storage medium and electronic equipment
US10803554B2 (en) Image processing method and device
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
US9865063B2 (en) Method and system for image feature extraction
KR101618996B1 (en) Sampling method and image processing apparatus for estimating homography
CA2840436A1 (en) System for mapping and identification of plants using digital image processing and route generation
CN109977952B (en) Candidate target detection method based on local maximum
CN116012364B (en) SAR image change detection method and device
CN111199188B (en) Pixel processing method, device, storage medium and equipment of remote sensing image difference map
CN110598696B (en) Medical image scanning and positioning method, medical image scanning method and computer equipment
CN114719966A (en) Light source determination method and device, electronic equipment and storage medium
Li et al. Unsupervised road extraction via a Gaussian mixture model with object-based features
CN111882565B (en) Image binarization method, device, equipment and storage medium
CN111709876B (en) Image splicing method, device, equipment and storage medium
CN108960247B (en) Image significance detection method and device and electronic equipment
CN112053371A (en) Water body extraction method and device in remote sensing image
CN117011304A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
Stein et al. Handling uncertainties in image mining for remote sensing studies
Touya et al. Comparing image-based methods for assessing visual clutter in generalized maps
CN113269752A (en) Image detection method, device, terminal equipment and storage medium
CN111062341B (en) Video image area classification method, device, equipment and storage medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method
CN115760578A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant