CN115239698A - Change detection method and system based on multi-level feature fusion of subdivision grid images

Change detection method and system based on multi-level feature fusion of subdivision grid images

Info

Publication number
CN115239698A
CN115239698A
Authority
CN
China
Prior art keywords
image
grid
change
model
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210998313.5A
Other languages
Chinese (zh)
Inventor
杜子聪 (Du Zicong)
司艳红 (Si Yanhong)
魏俊彪 (Wei Junbiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Yunyao Shenzhen Technology Co ltd
Original Assignee
Zhongke Yunyao Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Yunyao Shenzhen Technology Co ltd filed Critical Zhongke Yunyao Shenzhen Technology Co ltd
Priority to CN202210998313.5A priority Critical patent/CN115239698A/en
Publication of CN115239698A publication Critical patent/CN115239698A/en
Pending legal-status Critical Current


Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature-extraction or classification level
    • G06V 20/13: Satellite images (terrestrial scenes)
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a change detection method and system based on multi-level feature fusion of subdivision grid images, comprising the following steps: S1, acquiring images of different time phases of a region to be detected; S2, training feature extraction models for different image sizes and a grid model for subdividing the image; S3, performing grid subdivision on the current image to generate a plurality of grid cells with unique grid codes; S4, generating an all-zero change matrix and extracting a plurality of image features from the different time phases. Using the change matrix as a substrate, the method can effectively fuse multiple models at multiple levels: the model is selected adaptively according to the patch size of each image, a CNN model with a larger effective receptive field is chosen for large patches, a spectral-difference or histogram-difference algorithm is chosen for small patches, and the extracted feature values are mapped through the grid codes to positions in the change matrix and accumulated into the corresponding blocks of the change matrix, thereby achieving effective multi-level, multi-model fusion.

Description

Change detection method and system based on multi-level feature fusion of subdivision grid images
Technical Field
The invention relates to the technical field of remote sensing change detection, in particular to a change detection method and system based on multi-level feature fusion of a subdivision grid image.
Background
Remote sensing change detection determines whether ground objects in the same area have changed across multi-temporal remote sensing images, and even how they have changed. By processing multi-temporal images covering the same area with manual and computer assistance, change information is extracted accurately and quickly, enabling dynamic monitoring of changed ground features and analysis of surface-change trends and evolution; this plays an extremely important role in fields such as urban expansion, land-use change, forest and vegetation change, ecological-environment monitoring, and disaster monitoring.
In the existing remote sensing image change detection technology, there are two main solutions:
one obtains change information by comparing the spectral difference of the two images, and suits images that are well geometrically registered and small in size;
the other extracts image features with a CNN convolutional neural network, has a large effective receptive field, and suits images of large size.
Both solutions also have drawbacks:
1. The first technique's accuracy relies on the images being geometrically well registered; it detects changes well over small extents, but its applicable range is narrow and it cannot effectively detect changes in larger images;
2. The second technique obtains a large effective receptive field, but many targets in remote sensing images are small, spanning only dozens or even a few pixels; for change detection of such small targets, the CNN's pooling layers further reduce the information content, leaving the dimensionality too low to distinguish changes.
In conclusion, the two methods have complementary advantages and disadvantages, and the current technology suffers from low recognition accuracy when a single model must detect changes across different scales.
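For concreteness, the first (spectral-difference) approach reduces to pixel-wise differencing of co-registered bands; the sketch below is an illustrative stand-in, not the patent's implementation, and the function names are made up.

```python
# Hedged sketch of spectral-difference change detection on two small,
# co-registered single-band patches represented as nested lists.

def spectral_difference(img_a, img_b):
    """Per-pixel absolute spectral difference of two equally sized bands."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def mean_difference(diff):
    """Scalar change score: mean value of the difference image."""
    vals = [v for row in diff for v in row]
    return sum(vals) / len(vals)

t1 = [[10, 10], [10, 10]]   # phase-1 patch
t2 = [[10, 30], [10, 10]]   # phase-2 patch: one pixel changed
d = spectral_difference(t1, t2)
print(mean_difference(d))   # 5.0
```

As the background notes, such a score is only meaningful when the two patches are geometrically well registered, which is why it is restricted to small, well-aligned images.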
Therefore, a change detection method and system based on multi-level feature fusion of subdivision grid images are provided.
Disclosure of Invention
In view of this, embodiments of the present invention provide a change detection method and system based on multi-level feature fusion of subdivision grid images, so as to solve or alleviate the technical problems in the prior art and to provide at least one useful option.
The technical scheme of the embodiments of the invention is realized as follows: the change detection method based on multi-level feature fusion of subdivision grid images comprises the following steps:
S1, acquiring images of different time phases of a region to be detected;
S2, training feature extraction models for different image sizes and a grid model for subdividing the image;
S3, performing grid subdivision on the current image to generate a plurality of grid cells with unique grid codes;
S4, generating an all-zero change matrix and extracting a plurality of image features from the different time phases;
S5, importing the images into the feature extraction models and determining the grid model required for subdividing the current image of the region to be detected;
and S6, mapping the extracted feature values through the grid codes to positions in the change matrix and accumulating them into the corresponding blocks of the change matrix, thereby achieving effective multi-level, multi-model fusion.
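The steps S1-S6 above can be sketched as a toy end-to-end pipeline; everything below (the grid coding, the single mean-difference "model", the cell size) is an illustrative stand-in for the patent's GeoSOT subdivision and trained feature extractors.

```python
# Illustrative sketch of S1-S6: subdivide the image extent into coded grid
# cells, compute a per-cell feature from the two time phases, and accumulate
# each feature back into the cell's block of an all-zero change matrix.

def subdivide(width, height, cell):
    """S3: split the image extent into grid cells keyed by a unique code."""
    return {(r, c): (r * cell, c * cell)
            for r in range(height // cell) for c in range(width // cell)}

def detect_changes(img_t1, img_t2, cell=2):
    h, w = len(img_t1), len(img_t1[0])
    change = [[0.0] * w for _ in range(h)]           # S4: all-zero change matrix
    for code, (y, x) in subdivide(w, h, cell).items():
        # S5: a real system would pick a CNN or spectral model per cell size;
        # here one toy feature (mean absolute difference) stands in for both.
        feat = sum(abs(img_t1[y + i][x + j] - img_t2[y + i][x + j])
                   for i in range(cell) for j in range(cell)) / cell ** 2
        for i in range(cell):                        # S6: map the feature back
            for j in range(cell):                    # by the cell's position
                change[y + i][x + j] += feat
    return change

t1 = [[0, 0, 0, 0]] * 4
t2 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 9], [0, 0, 9, 9]]
cm = detect_changes(t1, t2)
print(cm[3][3])   # 9.0, the changed cell accumulates its feature value
```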
Further preferably, the feature extraction models comprise a CNN feature extraction model and a change-difference extraction model;
when the feature extraction models are built, the patch size of each hierarchical grid is set in a user-defined manner;
the CNN feature extraction model extracts features from large images, for example images whose GeoSOT subdivision-code level is less than or equal to 20;
the change-difference extraction model extracts change differences from small images, for example images whose GeoSOT subdivision-code level is greater than 20; when it processes an image, it may fuse several algorithms, for example performing the change-difference extraction by either a spectral-difference or a histogram method.
Further preferably, when the image of the region to be detected is subdivided by the grid model, the method comprises the following steps:
S31, performing grid subdivision on the current image through an encoding unit to generate a plurality of grid cells with unique grid codes;
S32, initializing a task stack: acquiring the grid set of patches corresponding to the image's minimum bounding rectangle;
S33, calculating the size of the change matrix and generating an all-zero change matrix;
and S34, specifying an end level.
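Steps S31-S34 amount to seeding a last-in-first-out task stack with the grid codes covering the minimum bounding rectangle and allocating the all-zero change matrix; the grid codes, sizes, and function name in this sketch are made up for illustration.

```python
# Minimal sketch of S31-S34: seed the task stack and allocate the matrix.

def init_detection(bounding_codes, matrix_w, matrix_h, end_level):
    task_stack = list(bounding_codes)                  # S32: LIFO stack of codes
    change = [[0] * matrix_w for _ in range(matrix_h)] # S33: all-zero matrix
    return task_stack, change, end_level               # S34: user-specified end level

stack, change, end = init_detection(["A-16", "B-16"], 4, 3, end_level=20)
print(len(stack), sum(map(sum, change)), end)   # 2 0 20
```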
Further preferably, extracting the image features comprises the following steps:
S41, popping grid codes one by one from the task stack and acquiring the area image corresponding to each grid code;
S42, calculating a plurality of image features for the two different time phases;
and S43, calculating the image feature value by multi-model mixing.
Further preferably, after feature extraction is finished, the method further comprises updating the change matrix, which comprises the following steps:
S51, after the image feature values have been computed by model mixing, mapping the grid code to its region of the change matrix and performing a weighted summation of the feature values;
S52, judging whether the end level has been reached:
if not: proceeding to step S53;
if so: proceeding to step S54;
S53, pushing the sub-level grid codes onto the task stack, popping the current grid code, and returning to step S41;
S54, popping the current grid code from the task stack;
S55, judging whether the task stack is empty:
if not: returning to step S41;
if so: proceeding to the next step.
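The update loop in steps S41-S55 can be read as a stack-driven quadtree traversal. A hedged sketch follows, in which string suffixes ("g.0" through "g.3") simulate GeoSOT-style child codes and the `region` and `feature` functions are illustrative stand-ins:

```python
# Sketch of S41-S55: codes are popped from a LIFO task stack, their feature
# value is summed into the mapped change-matrix block, and codes that have
# not reached the end level push their four child codes back onto the stack.

def process(stack, end_level, feature, region, change):
    while stack:                          # S55: stop when the stack is empty
        code = stack.pop()                # S41: take the top grid code
        y0, x0, y1, x1 = region(code)     # the matrix block this code maps to
        val = feature(code)               # S42/S43: mixed-model feature value
        for y in range(y0, y1):           # S51: sum into the mapped block
            for x in range(x0, x1):
                change[y][x] += val
        if code.count(".") < end_level:   # S52/S53: descend to the children
            stack.extend(f"{code}.{q}" for q in range(4))
        # S54: otherwise the popped code is simply discarded
    return change

def region(code):
    """Toy mapping: the root covers a 2x2 matrix, child q covers one cell."""
    parts = code.split(".")
    if len(parts) == 1:
        return (0, 0, 2, 2)
    q = int(parts[-1])
    return (q // 2, q % 2, q // 2 + 1, q % 2 + 1)

change = process(["g"], 1, lambda c: 1.0, region, [[0.0, 0.0], [0.0, 0.0]])
print(change)   # [[2.0, 2.0], [2.0, 2.0]], root plus one child per cell
```

Note how coarse and fine levels both contribute to the same matrix cell, which is exactly the multi-level fusion the method describes.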
Further preferably, after the updating of the change matrix is finished, the method further comprises threshold segmentation, which comprises segmenting the image according to the change matrix.
Further preferably, the feature extraction models use the following formulas:

f1(g) = Σi wi · confidencei(g1, g2)

f2(g) = Σj wj · similarityj(g1, g2)

f3(g) = P(g)

wherein g is a grid code; g1 and g2 are the images of that grid code at two different time phases; f1 combines the category confidences of g1 and g2 judged by at least two CNN convolutional neural network models; f2 combines the similarities of g1 and g2 computed by at least two traditional image-processing algorithms; w is a feature-coefficient weight, similarity is a similarity feature value, and confidence is the confidence with which a neural network judges the grid category; f3 is the probability, obtained from other data sources, that the grid code may have changed;
when the image is large, for example when the level of the GeoSOT subdivision code is between 0 and 20, the convolutional neural network f1 performs the feature extraction;
when the image is small, for example when the level of the GeoSOT subdivision code is between 20 and 31, the traditional vision algorithm f2 performs the feature extraction;
the prior knowledge f3 can be used at any level to help correct the recognition result.
A change detection system based on multi-level feature fusion of subdivision grid images comprises:
an acquisition module, used for acquiring images of different time phases of a region to be detected;
an encoding module, used for performing grid subdivision on the current image and generating a plurality of grid cells with unique grid codes;
an identification module, used for identifying the size information of the current image and selecting different feature extraction models for feature extraction according to that size information;
an extraction module, used for extracting a plurality of image features from the different time phases;
and a detection module, used for mapping the extracted feature values through the grid codes to positions in the change matrix and accumulating them into the corresponding blocks of the change matrix, achieving effective multi-level, multi-model fusion.
A computer device comprises a processor and a memory coupled to the processor, the memory storing program instructions which, when executed by the processor, cause the processor to perform the steps of the change detection method based on multi-level feature fusion of subdivision grid images described above.
A storage medium stores program instructions capable of implementing the change detection method based on multi-level feature fusion of subdivision grid images described above.
Due to the adoption of the above technical scheme, the embodiments of the invention have the following advantages:
using the change matrix as a substrate, the method can effectively fuse multiple models at multiple levels; it selects the model adaptively according to the patch size of each image, choosing a CNN convolutional neural network model with a larger effective receptive field for large images and a spectral-difference or histogram-difference algorithm for small images, and it maps the extracted feature values through the grid codes to positions in the change matrix and accumulates them into the corresponding blocks, achieving effective multi-level, multi-model fusion;
the invention can also correct the change detection result with prior knowledge: external data sources can be indexed by the unique grid codes to obtain prior knowledge that helps correct the change result.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and the following detailed description.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments or in the technical descriptions are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the grid change detection overview of the present invention;
FIG. 3 is the time-phase-1 image of two images with the same latitude and longitude range and different time phases according to an embodiment of the present invention;
FIG. 4 is the time-phase-2 image of two images with the same latitude and longitude range and different time phases according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating geographic information of an image minimum-outsourcing rectangular patch according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating an update of the affine transformation matrix of the image shown in FIG. 5 according to a second embodiment of the present invention;
FIG. 7 is a first schematic diagram of the two time-phase images cropped by four-corner coordinates according to an embodiment of the present invention;
FIG. 8 is a second schematic diagram of the two time-phase images cropped by four-corner coordinates according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating the addition of eigenvalues to the assigned positions of the variation matrix according to a second embodiment of the present invention;
FIG. 10 is a schematic diagram of a second sub-level 17 level area of a second embodiment of the present invention;
fig. 11 is a schematic view of a variation matrix according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "plurality" in this application means at least two, e.g., two, three, etc., unless explicitly limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Example one
As shown in fig. 1-2, an embodiment of the present invention provides a change detection method based on multi-level feature fusion of a split grid image, including the following steps:
S1, acquiring images of different time phases of a region to be detected;
S2, training feature extraction models for different image sizes and a grid model for subdividing the image;
the feature extraction models comprise a CNN feature extraction model and a change-difference extraction model;
when the feature extraction models are built, the patch size of each hierarchical grid is set in a user-defined manner;
the CNN feature extraction model extracts features from large images, for example images whose GeoSOT subdivision-code level is less than or equal to 20;
the change-difference extraction model extracts change differences from small images, for example images whose GeoSOT subdivision-code level is greater than 20; when it processes an image, it may fuse several algorithms, for example performing the change-difference extraction by either a spectral-difference or a histogram method;
when the feature extraction models perform feature extraction, the following formulas are adopted:

f1(g) = Σi wi · confidencei(g1, g2)

f2(g) = Σj wj · similarityj(g1, g2)

f3(g) = P(g)

wherein g is a grid code; g1 and g2 are the images of that grid code at two different time phases; f1 combines the category confidences of g1 and g2 judged by at least two CNN convolutional neural network models; f2 combines the similarities of g1 and g2 computed by at least two traditional image-processing algorithms; w is a feature-coefficient weight, similarity is a similarity feature value, and confidence is the confidence with which a neural network judges the grid category; f3 is the probability, obtained from other data sources, that the grid code may have changed;
when the image is large, for example when the level of the GeoSOT subdivision code is between 0 and 20, the convolutional neural network f1 performs the feature extraction;
when the image is small, for example when the level of the GeoSOT subdivision code is between 20 and 31, the traditional vision algorithm f2 performs the feature extraction;
the prior knowledge f3 can be used at any level to help correct the recognition result;
s3, performing mesh subdivision on the current image to generate a plurality of mesh units with unique mesh codes;
when the image of the region to be detected is subdivided by the grid model, the method comprises the following steps:
S31, performing grid subdivision on the current image through an encoding unit to generate a plurality of grid cells with unique grid codes;
S32, initializing a task stack: acquiring the grid set of patches corresponding to the image's minimum bounding rectangle;
S33, calculating the size of the change matrix and generating an all-zero change matrix;
S34, specifying an end level;
S4, generating an all-zero change matrix and extracting a plurality of image features from the different time phases;
extracting the image features comprises the following steps:
S41, popping grid codes one by one from the task stack and acquiring the area image corresponding to each grid code;
S42, calculating a plurality of image features for the two different time phases;
S43, calculating the image feature value by multi-model mixing;
S5, importing the images into the feature extraction models and determining the grid model required for subdividing the current image of the region to be detected;
after feature extraction is finished, the method further comprises updating the change matrix, which comprises the following steps:
S51, after the image feature values have been computed by model mixing, mapping the grid code to its region of the change matrix and performing a weighted summation of the feature values;
S52, judging whether the end level has been reached:
if not: proceeding to step S53;
if so: proceeding to step S54;
S53, pushing the sub-level grid codes onto the task stack, popping the current grid code, and returning to step S41;
S54, popping the current grid code from the task stack;
S55, judging whether the task stack is empty:
if not: returning to step S41;
if so: proceeding to the next step;
after the updating of the change matrix is finished, the method further comprises threshold segmentation, which comprises segmenting the image according to the change matrix;
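The final threshold segmentation reduces, in essence, to binarizing the accumulated change matrix into a change mask; a minimal sketch with an illustrative threshold value:

```python
# Sketch of threshold segmentation: cells of the change matrix at or above
# the threshold are marked as changed (1), the rest as unchanged (0).

def threshold_segment(change, thresh):
    return [[1 if v >= thresh else 0 for v in row] for row in change]

change = [[0.1, 2.4], [3.0, 0.0]]
print(threshold_segment(change, 1.0))   # [[0, 1], [1, 0]]
```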
and S6, mapping the extracted feature values through the grid codes to positions in the change matrix and accumulating them into the corresponding blocks of the change matrix, thereby achieving effective multi-level, multi-model fusion.
A change detection system based on multi-level feature fusion of subdivision grid images comprises:
an acquisition module, used for acquiring images of different time phases of a region to be detected;
an encoding module, used for performing grid subdivision on the current image and generating a plurality of grid cells with unique grid codes;
an identification module, used for identifying the size information of the current image and selecting different feature extraction models for feature extraction according to that size information;
an extraction module, used for extracting a plurality of image features from the different time phases;
and a detection module, used for mapping the extracted feature values through the grid codes to positions in the change matrix and accumulating them into the corresponding blocks of the change matrix, achieving effective multi-level, multi-model fusion.
A computer device comprises a processor and a memory coupled to the processor, the memory storing program instructions which, when executed by the processor, cause the processor to perform the steps of the change detection method based on multi-level feature fusion of subdivision grid images described above.
The processor may be a central processing unit (CPU): an integrated circuit chip with signal-processing capability.
The processor may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
A storage medium stores program instructions capable of implementing the change detection method based on multi-level feature fusion of subdivision grid images described above. The storage medium of the embodiments of the present application stores program instructions capable of implementing all of the methods described above; the program instructions may be stored in the storage medium in the form of a software product and include several instructions that enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the method described in each embodiment of the present application.
The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a computer device such as a computer, a server, a mobile phone, or a tablet. The server may be an independent server, or a cloud server providing basic cloud-computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain-name services, security services, a content delivery network (CDN), and big-data and artificial-intelligence platforms.
Example two
As shown in fig. 1 to 11, the present invention further provides an embodiment of remote sensing image detection according to a method of the embodiment, including the following steps:
the method comprises the following steps: parameter preparation
As shown in fig. 3-4, two images with the same latitude and longitude range and different time phases are prepared.
Initializing the task stack: the task stack is a linear table allowing insertion or deletion only at one end, with last-in, first-out behavior. First, the grid set of patches corresponding to the minimum bounding rectangle of the two time-phase images is obtained; that is, a grid-code set at a single level is found that exactly divides the image into four parts, and this grid set initializes the task stack. In the example, the grid codes of the patches corresponding to the image's minimum bounding rectangle are ['417394478127513600-16', '417394486714848192-16', '417394491012415488-16', '417394499602350080-16'], and the corresponding geographical location information is shown in fig. 5. The level of these grid codes is taken as the starting level, level_start, and the user specifies the end level, level_end, wherein level_start < level_end.
Generating a change matrix: the change matrix describes the changes between the two different-phase images. Its size matches the coverage of the grid codes of the patches corresponding to the minimum bounding rectangle, and its initial state is an all-zero matrix. Computing the size of the change matrix requires the following steps:
Calculate the affine transformation matrix of the change matrix. First, compute the longitude and latitude of the upper-left corner of the minimum-bounding-rectangle patch grid set, as shown in fig. 6, and update the image affine transform GeoTransform = (113.04222222222222, 9.999999999993514e-06, 0.0, 23.6833333333334, 0.0, -9.9999999992915e-06). The meaning of each parameter of the affine transform is as follows:
GeoTransform[0]: longitude of the upper-left corner of the image;
GeoTransform[1]: east-west resolution of the image;
GeoTransform[2]: rotation angle; 0 if the image is north-up;
GeoTransform[3]: latitude of the upper-left corner of the image;
GeoTransform[4]: rotation angle; 0 if the image is north-up;
GeoTransform[5]: north-south resolution of the image (negative for north-up images);
Calculate the size of the change matrix. From the affine transform and the longitude and latitude of the lower-right corner, the size of the change matrix is computed as:

img_width = (x_r - GeoTransform[0]) / GeoTransform[1]

img_height = (y_b - GeoTransform[3]) / GeoTransform[5]

where img_width is the pixel width of the change matrix, img_height is the pixel height of the change matrix, x_r is the longitude of the lower-right corner, and y_b is the latitude of the lower-right corner.
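The size computation can be sketched directly from the GDAL-style GeoTransform. The lower-right corner values x_r and y_b below are illustrative (the example does not state them), and rounding up with `ceil` to cover the full extent is my assumption.

```python
import math

def change_matrix_size(geotransform, x_r, y_b):
    """Pixel size of the change matrix, from the affine transform of the
    minimum-bounding-rectangle grid set and the lower-right corner
    longitude/latitude (GDAL GeoTransform parameter order)."""
    x0, we_res, _, y0, _, ns_res = geotransform
    img_width = math.ceil((x_r - x0) / we_res)    # east-west pixel count
    img_height = math.ceil((y_b - y0) / ns_res)   # ns_res < 0 for north-up images
    return img_width, img_height

# GeoTransform from the example; x_r and y_b are illustrative values.
gt = (113.04222222222222, 9.999999999993514e-06, 0.0,
      23.6833333333334, 0.0, -9.9999999992915e-06)
w, h = change_matrix_size(gt, x_r=113.05777777777777, y_b=23.666666666666668)
```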
Step two: feature extraction
Take grid codes one by one from the top of the task stack; the current stack top is 417394478127513600-16. Calculate its four-corner longitude and latitude coordinates (lower-left longitude: 113.04222222222222; lower-left latitude: 23.6666666666668; upper-right longitude: 113.05; upper-right latitude: 23.675555555555555).
Crop the change features of the two different-phase images (figs. 7-8) according to the four-corner coordinates of the grid code. Note that the grid-code level here is 16: images at levels below 20 can be regarded as large-size images, which can be characterized using a CNN convolutional neural network, whereas images at levels above 20 are regarded as small-size images, for which spectral feature extraction or histogram feature extraction may be used.
f1(g) = Σ_i w_i · confidence_i(g1, g2)

f2(g) = Σ_j w_j · similarity_j(g1, g2)

where g is a grid code; g1 and g2 are the images of that grid code at the two different phases; f1 uses at least two CNN convolutional neural network models to judge the class confidence of g1 and g2; f2 combines at least two traditional image-processing algorithms to compute the similarity of g1 and g2; w is the weight of each feature coefficient; similarity is the similarity feature value; confidence is the confidence of the neural network's grid-class judgment; and f3 is the probability, obtained from other data sources, that the grid code may have changed. When the current grid level is between 0 and 20, the convolutional neural network f1 is used for feature extraction; when the current grid level is between 20 and 31, the traditional vision algorithm f2 is used. Prior knowledge f3 can be used at any level to assist in correcting the recognition result.
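The level-based dispatch between CNN models (f1) and classic similarity measures (f2) can be sketched as below. All model callables are placeholders supplied by the caller; equal default weights are an assumption where the text does not fix them.

```python
def extract_change_feature(level, g1, g2, cnn_models=(), classic_algos=(),
                           weights=None):
    """Level-based dispatch sketched from the text: CNN confidences (f1)
    for grids at level 20 or below, classic similarity measures (f2,
    e.g. spectral or histogram comparison) for finer grids."""
    models = cnn_models if level <= 20 else classic_algos
    scores = [m(g1, g2) for m in models]
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)  # plain average by default
    # weighted combination of the individual feature values
    return sum(w * s for w, s in zip(weights, scores))

# Two stand-in "models" returning fixed confidences, for illustration only.
value = extract_change_feature(
    16, None, None,
    cnn_models=(lambda a, b: 0.8, lambda a, b: 0.6))
```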
Calculate the position of the current grid code, 417394478127513600-16, in the change matrix. Since the current level is 16, method f1 is used for feature extraction (if the level were greater than 20, f2 would be used instead). An averaging strategy then combines the multiple f1 features, i.e. the feature-value sets f11, f12, ..., f1n are weighted-averaged (likewise for f2 when the level is greater than 20), and the resulting feature value H(x) is added at the grid's position in the change matrix, as in the lower-left region of fig. 9:

H(x) = (1/n) Σ_i w_i · f1_i(g), for levels 0-20

H(x) = (1/m) Σ_j w_j · f2_j(g), for levels 20-31
Pushing child grid codes to update the task stack: after grid code 417394478127513600-16 has been written into the change matrix, if the current grid-code level has not reached the specified end level, calculate its child-level grid codes ["417394478127513600-17", "417394470129201255424-17", "417394480274997248-17", "417394481348772-17"], push the child codes onto the top of the task stack as shown in the lower-left area of fig. 10, and pop grid code 417394478127513600-16 off the stack. The stack top is now the level-17 child grid code 417394478127513600-17; take it off the stack, repeat the operation of step two to extract its features, and compute its child-level grid codes after the feature extraction, continuing until the user-specified level is reached.
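The pop/extract/push cycle above can be sketched as a small driver loop. Everything domain-specific (feature extraction, subdivision, change-matrix update) is passed in as a placeholder callable; the toy four-child subdivision in the usage example is purely illustrative.

```python
def detect_changes(initial_codes, level_end, extract, children_of, write_cell):
    """Stack-driven traversal sketched from step two: pop the top grid
    code, extract and record its change feature, then push its child
    codes until the user-specified end level is reached."""
    stack = list(reversed(initial_codes))
    while stack:
        code = stack.pop()                   # LIFO: take the stack top
        level = int(code.rsplit("-", 1)[1])  # level suffix after the dash
        write_cell(code, extract(code, level))
        if level < level_end:
            stack.extend(children_of(code))  # push the child codes on top

# Toy subdivision: each code has four children one level finer.
visited = []
detect_changes(
    ["A-16"], 17,
    extract=lambda code, level: 0.0,
    children_of=lambda code: [
        f"{code.split('-')[0]}{i}-{int(code.split('-')[1]) + 1}"
        for i in range(4)],
    write_cell=lambda code, value: visited.append(code))
```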
Step three: threshold segmentation
Image segmentation: as shown in fig. 11, the darker a region of the image, the more likely it has changed; segmentation is achieved by applying a threshold or mathematical morphology.
A multilevel grid change detection method based on subdivision grids: different algorithm models can be used for feature extraction depending on image size. For large-size images at levels below 20, a CNN can be used for feature extraction; for small-size images, spectral differences or histograms can be used to extract change differences.
The feature fusion method based on the change matrix: through the one-to-one mapping between each grid code and the change matrix, the grid images are superimposed into the change matrix one by one after their change features are extracted. Fusion across multiple levels and multiple models is achieved by superimposing the change feature values of different models at different levels.
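The one-to-one mapping from a grid cell to its block of the change matrix can be sketched with the inverse of the affine transform. The north-up assumption (zero rotation terms) and the rounding choice are mine; the toy 1-degree transform in the usage example is illustrative.

```python
def grid_to_matrix_window(geotransform, lon_min, lat_min, lon_max, lat_max):
    """Map a grid cell's longitude/latitude bounds to the (row, col)
    window of the change matrix it covers, assuming a north-up image."""
    x0, we_res, _, y0, _, ns_res = geotransform
    col0 = round((lon_min - x0) / we_res)
    col1 = round((lon_max - x0) / we_res)
    row0 = round((lat_max - y0) / ns_res)  # ns_res < 0: max latitude maps to top row
    row1 = round((lat_min - y0) / ns_res)
    return row0, row1, col0, col1

# Toy transform: 1-degree pixels, upper-left corner at (0 E, 10 N).
window = grid_to_matrix_window((0.0, 1.0, 0.0, 10.0, 0.0, -1.0),
                               lon_min=2.0, lat_min=6.0, lon_max=5.0, lat_max=8.0)
```

Feature values for that grid code are then accumulated into `change_matrix[row0:row1, col0:col1]`, which is how feature values of different levels and models superimpose into one matrix.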
In the several embodiments provided in the present application, it should be understood that the disclosed terminal, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit. The above are only embodiments of the present application, and not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (8)

1. A change detection method based on multi-level feature fusion of subdivision grid images, characterized by comprising the following steps:
s1, acquiring images of different time phases of a to-be-detected area;
S2, training feature extraction models for different image sizes and a grid model for subdividing the image;
s3, performing mesh generation on the current image to generate a plurality of mesh units with unique mesh codes;
S4, generating an all-zero change matrix, and extracting a plurality of image features of the different phases;
S5, importing the image into the feature extraction model, and determining the grid model by which the image of the current region to be detected needs to be subdivided;
and S6, mapping the extracted feature values to the positions of the change matrix through the grid codes, and superimposing the feature values onto the corresponding blocks of the change matrix to achieve effective multi-level, multi-model fusion.
2. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 1, characterized in that: the feature extraction models comprise a CNN feature extraction model and a change difference extraction model;
when the feature extraction model is selected, the patch size of each hierarchical grid is set in a user-defined manner;
the CNN feature extraction model is used for extracting features of a large-size image;
the change difference extraction model is used for extracting change differences of small-size images; when it processes an image, the change difference is extracted using either spectral differences or histograms.
3. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 1, characterized in that: subdividing the image of the region to be detected through the grid model comprises the following steps:
s31, mesh generation is carried out on the current image through a coding unit, and a plurality of mesh units with unique mesh codes are generated;
S32, initializing a task stack: acquiring the grid set of patches corresponding to the minimum bounding rectangle of the image;
S33, calculating the size of the change matrix and generating an all-zero change matrix;
and S34, designating an ending level.
4. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 1, characterized in that: extracting the image features comprises the following steps:
s41, extracting the grid codes from the task stack one by one, and acquiring the area images corresponding to the grid codes;
s42, calculating a plurality of image characteristics of two different time phases;
and S43, calculating the image characteristic value by multi-model mixing.
5. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 4, characterized in that: after the feature extraction is finished, the method further comprises updating the change matrix, which comprises the following steps:
S51, after the image feature values are computed by the mixed models, mapping the grid code to the corresponding region of the change matrix and performing weighted summation of the feature values;
S52, judging whether the end level has been reached:
if not: the flow advances to step S53;
if so: the flow advances to step S54;
S53, pushing the sub-level grid codes onto the task stack, popping the current grid code off the task stack, and returning to step S41;
S54, popping the current grid code off the task stack;
s55, judging whether the task stack is empty:
if not: entering step S41;
if so: the flow advances to the next step.
6. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 5, characterized in that: after the change matrix has been updated, the method further comprises threshold segmentation, which comprises segmenting the image according to the change matrix.
7. The change detection method based on multi-level feature fusion of subdivision grid images according to claim 2, characterized in that: the feature extraction models use the following formulas:
f1(g) = Σ_i w_i · confidence_i(g1, g2)

f2(g) = Σ_j w_j · similarity_j(g1, g2)

f3(g) = P(g)

wherein: g is a grid code; g1 and g2 are the images of the grid code at the two different phases; f1 uses at least two CNN convolutional neural network models to judge the class confidence of g1 and g2; f2 combines at least two traditional image-processing algorithms to compute the similarity of g1 and g2; w is the weight of each feature coefficient; similarity is the similarity feature value; confidence is the confidence of the neural network's grid-class judgment; f3 is the probability, obtained from other data sources, that the grid code may have changed;
when the image is a large-size image, the convolutional neural network f1 is used for feature extraction;
when the image is a small-size image, the traditional vision algorithm f2 is used for feature extraction;
the prior knowledge f3 is used to assist in correcting the recognition result.
8. A change detection system based on multi-level feature fusion of subdivision grid images according to any one of claims 1-7, comprising:
the acquisition module is used for acquiring images of different time phases of a to-be-detected area;
the encoding module is used for meshing the current image and generating a plurality of mesh units with unique mesh codes;
the identification module is used for identifying the size information of the current image and selecting different feature extraction models for feature extraction according to the image size information;
the extraction module is used for extracting a plurality of image characteristics of different time phases;
and the detection module is used for mapping the extracted feature values to the positions of the change matrix through the grid codes, and superimposing the feature values onto the corresponding blocks of the change matrix to achieve effective multi-level, multi-model fusion.
CN202210998313.5A 2022-08-19 2022-08-19 Change detection method and system based on multi-level feature fusion of subdivision grid images Pending CN115239698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210998313.5A CN115239698A (en) 2022-08-19 2022-08-19 Change detection method and system based on multi-level feature fusion of subdivision grid images


Publications (1)

Publication Number Publication Date
CN115239698A 2022-10-25

Family

ID=83681421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210998313.5A Pending CN115239698A (en) 2022-08-19 2022-08-19 Change detection method and system based on multi-level feature fusion of subdivision grid images

Country Status (1)

Country Link
CN (1) CN115239698A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109523A (en) * 2023-04-11 2023-05-12 深圳奥雅设计股份有限公司 Intelligent design image defect point automatic repairing method and system
CN116109523B (en) * 2023-04-11 2023-06-30 深圳奥雅设计股份有限公司 Intelligent design image defect point automatic repairing method and system
CN117554862A (en) * 2024-01-11 2024-02-13 山东康吉诺技术有限公司 Intelligent detection and early warning method and system for transformer
CN117554862B (en) * 2024-01-11 2024-03-29 山东康吉诺技术有限公司 Intelligent detection and early warning method and system for transformer


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination