CN116958172A - Urban protection and update evaluation method based on three-dimensional space information - Google Patents

Urban protection and update evaluation method based on three-dimensional space information

Info

Publication number
CN116958172A
CN116958172A (application CN202310965690.3A)
Authority
CN
China
Prior art keywords
connected domain
target
pair
pairs
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310965690.3A
Other languages
Chinese (zh)
Other versions
CN116958172B (en)
Inventor
白颢
王泽玮
李雄威
吴小曼
曲培元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinjing Hainan Technology Development Co ltd
Original Assignee
Jinjing Hainan Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinjing Hainan Technology Development Co ltd filed Critical Jinjing Hainan Technology Development Co ltd
Priority to CN202310965690.3A priority Critical patent/CN116958172B/en
Publication of CN116958172A publication Critical patent/CN116958172A/en
Application granted granted Critical
Publication of CN116958172B publication Critical patent/CN116958172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00Adapting or protecting infrastructure or their operation
    • Y02A30/60Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a city protection and update evaluation method based on three-dimensional space information, which comprises the following steps: constructing a graph structure for each of the two images according to the center point of each region, obtaining first connected domains and second connected domains according to the matched node pairs in the two graph structures, and obtaining connected domain pairs according to the first connected domains and the second connected domains; obtaining a target connected domain pair according to the intersection ratio of each connected domain pair and the distribution of its adjacent connected domain pairs, and obtaining direction weights according to the two connected domains in the target connected domain pair; distributing weights to the directions according to the direction weights, and carrying out improved sift operator registration according to the weighted directions, so as to carry out city protection and update evaluation based on three-dimensional space information. The invention solves the problem that the sift operator is difficult to match because different directions deform differently between the two images, so that the matching effect of the sift operator is better.

Description

Urban protection and update evaluation method based on three-dimensional space information
Technical Field
The invention relates to the technical field of image data processing, in particular to a city protection and update evaluation method based on three-dimensional space information.
Background
In the process of analyzing geospatial data with GIS technology and evaluating the effect of urban protection and update, the influence of different schemes on urban protection and update is usually revealed through means such as viewshed analysis, multi-objective conflict analysis and land-use change simulation. In land-use change simulation, the land change condition is generally obtained from the difference between two remote sensing images of different periods; because the two remote sensing images correspond to different times, they need to be registered to obtain corresponding feature areas, so that the change of ground objects can be analyzed to assist government decision making or city planning.
In the prior art, the two remote sensing images are registered with the sift operator to obtain the corresponding feature areas. However, because the two remote sensing images are taken at different shooting angles, the same area appears differently in the two images; the sift operator therefore has difficulty matching regions whose directions deform differently, which degrades its matching effect.
Disclosure of Invention
The invention provides a city protection and update evaluation method based on three-dimensional space information, which aims to solve the existing problems.
The city protection and update evaluation method based on three-dimensional space information adopts the following technical scheme:
one embodiment of the invention provides a city protection and update evaluation method based on three-dimensional space information, which comprises the following steps:
collecting a plurality of remote sensing images and obtaining a matching image and a target image;
respectively acquiring a plurality of areas for the matching image and the target image, constructing graph structures for the matching image and the target image respectively according to the center point of each area, acquiring a plurality of first connected domains of the matching image and a plurality of second connected domains of the target image according to the matched node pairs in the two graph structures, and acquiring a plurality of connected domain pairs in the matching image and the target image according to the first connected domains and the second connected domains;
obtaining the satisfaction degree of each connected domain pair according to the intersection ratio of each connected domain pair and the distribution of the adjacent connected domain pairs, obtaining a target connected domain pair according to the satisfaction degree, and obtaining the direction weight of each direction according to the two connected domains in the target connected domain pair;
and carrying out weight distribution on a plurality of directions according to the direction weights to obtain a plurality of directions with the weight distribution, and carrying out improved sift operator registration according to the directions with the weight distribution so as to carry out city protection and updating evaluation based on three-dimensional space information.
Preferably, the method for respectively acquiring a plurality of areas for the matching image and the target image includes the following specific steps:
and dividing the region of the matching image and the target image through a neural network, respectively inputting the matching image and the target image into the trained neural network, and outputting to obtain a plurality of regions in the matching image and the target image.
Preferably, the constructing graph structures for the matching image and the target image respectively according to the center point of each region includes the following specific steps:
acquiring a center point for each region in the matching image; taking each center point as a node of the graph structure, taking the Euclidean distance between any two center points as the edge weight between the corresponding nodes, and marking the obtained graph structure as the graph structure of the matching image;
acquiring a center point for each region in the target image; taking each center point as a node of the graph structure, taking the Euclidean distance between any two center points as the edge weight between the corresponding nodes, and marking the obtained graph structure as the graph structure of the target image.
Preferably, the obtaining a plurality of first connected domains of the matching image and a plurality of second connected domains of the target image according to the matched node pairs in the two graph structures includes the following specific steps:
obtaining the nodes capable of forming node pairs and the edges capable of forming edge pairs in the two graph structures through the VF2 algorithm, thereby obtaining a plurality of node pairs and edge pairs; marking the node of a node pair that lies in the graph structure of the matching image as a first node, marking the region corresponding to the first node as a first connected domain of the matching image, and obtaining a plurality of first connected domains in the matching image; and marking the node of the node pair that lies in the graph structure of the target image as the second node corresponding to the first node, marking the region corresponding to the second node as a second connected domain of the target image, and obtaining, in the target image, the second connected domain matched with each first connected domain of the matching image.
Preferably, the obtaining a plurality of connected domain pairs in the matching image and the target image according to the first connected domains and the second connected domains includes the following specific steps:
and marking the first connected domain and the matched second connected domain as a connected domain pair to obtain a plurality of connected domain pairs in the two remote sensing images.
Preferably, the obtaining the satisfaction degree of each connected domain pair according to the intersection ratio of each connected domain pair and the distribution of the adjacent connected domain pairs includes the following specific steps:
acquiring, for any one segment, its intersection ratio and the corresponding plurality of reference connected domain pairs, and recording the average of the Euclidean distance between the center points of the two first connected domains and the Euclidean distance between the center points of the two second connected domains in the two connected domain pairs as the distance between each reference connected domain pair and the connected domain pair; averaging all the distances to obtain the distribution distance of the segment;
acquiring the distribution distance of each segment, arranging the distribution distances in ascending order and marking the obtained sequence as the adjacent distribution sequence of the connected domain pair, and arranging the intersection ratios of the segments in the order of their corresponding distribution distances and marking the obtained sequence as the adjacent intersection sequence of the connected domain pair; carrying out linear normalization on the adjacent distribution sequence, taking the cosine similarity between the normalized sequence and the adjacent intersection sequence as the initial satisfaction degree of the connected domain pair, dividing the difference obtained by subtracting the initial satisfaction degree from 1 by 2, and taking the quotient as the satisfaction degree of the connected domain pair;
and acquiring the satisfaction degree of each connected domain pair.
Preferably, the method for obtaining the intersection ratio of any section and the corresponding plurality of reference connected domain pairs includes the following specific steps:
calculating the cross-over ratio of the two connected domains in each connected domain pair to obtain the cross-over ratio of each connected domain pair; for any one connected domain pair, acquiring a plurality of adjacent connected domain pairs of the connected domain pair, and marking the adjacent connected domain pairs as reference connected domain pairs of the connected domain pair;
acquiring the cross ratio of all reference connected domain pairs of the connected domain pairs, and arranging the cross ratios in ascending order from small to large, wherein the obtained sequence is marked as an adjacent cross ratio sequence of the connected domain pairs; carrying out OTSU multi-threshold segmentation on adjacent cross-over sequences, dividing the adjacent cross-over sequences into a plurality of sections, calculating the average value of the cross-over ratios in each section, and recording the average value as the cross-over ratio of each section; each element in the adjacent cross-over sequence corresponds to a reference connected domain pair, and each section containing a plurality of elements corresponds to a plurality of reference connected domain pairs.
Preferably, the method for obtaining the target connected domain pair according to the satisfaction degree includes the following specific steps:
and taking the product of the intersection ratio and the satisfaction degree of each connected domain pair as the target degree of each connected domain pair, and taking the connected domain pair with the maximum target degree as the target connected domain pair.
Preferably, the obtaining the direction weight of each direction according to the two connected domains in the target connected domain pair includes the following specific steps:
acquiring a first center point and a second center point;
aligning the first center point with the second center point by translation; marking the aligned first center point and second center point as the target center point, and casting n rays uniformly outwards from the target center point, each ray corresponding to one direction; for any one ray, marking the intersection point of the ray with the target first connected domain under superposition as a first intersection point, and recording the Euclidean distance between the target center point and the first intersection point as the first distance in the direction corresponding to the ray; marking the intersection point of the ray with the target second connected domain under superposition as a second intersection point, and recording the Euclidean distance between the target center point and the second intersection point as the second distance in the direction corresponding to the ray; recording the ratio of the first distance to the second distance as the consistency degree of the direction corresponding to the ray; and obtaining the consistency degree of the direction corresponding to each ray, carrying out softmax normalization on all the consistency degrees, and taking the obtained result as the direction weight of each direction.
Preferably, the acquiring the first center point and the second center point includes the following specific methods:
the first connected domain in the target connected domain pair is marked as the target first connected domain, and the center point of the target first connected domain is marked as the first center point; the second connected domain in the target connected domain pair is marked as the target second connected domain, and the center point of the target second connected domain is marked as the second center point.
The technical scheme of the invention has the beneficial effects that: by calculating the connected domain pairs of the two images, a plurality of connected domain pairs that change little and can be matched across the two images are obtained, and regions with larger changes are screened out; the target degree of each connected domain pair is calculated to obtain the corresponding region with the smallest change between the earlier and later periods, which participates in the subsequent calculation and improves its precision; the direction weight of each direction is obtained from the overlapping area of the target connected domain pair, and directions with a larger degree of deformation are given smaller direction weights, so that the problem that the sift operator is difficult to match because different directions deform differently is solved, and the matching effect of the sift operator is better.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of steps of the city protection and update evaluation method based on three-dimensional space information.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of the city protection and update evaluation method based on three-dimensional space information according to the present invention, and the detailed description is given below of the specific implementation, structure, features and effects thereof. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the urban protection and update evaluation method based on three-dimensional space information provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a city protection and update evaluation method based on three-dimensional space information according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and acquiring a plurality of remote sensing images and obtaining a matching image and a target image.
It should be noted that in the prior art, two remote sensing images are usually registered with the sift operator to obtain corresponding feature regions, so as to analyze the change of ground objects and assist government decision making or city planning. However, because the shooting angles of the two remote sensing images differ, the same region appears differently in the two images, which affects the matching effect of the sift operator. Based on this, the present embodiment provides a city protection and update evaluation method based on three-dimensional space information, which gives smaller similarity weights to directions with larger deformation, thereby solving the problem that the sift operator has difficulty matching regions whose directions deform differently and improving the matching effect of the sift operator.
Specifically, in order to implement the urban protection and update evaluation method based on three-dimensional space information provided in this embodiment, two remote sensing images of the same region in different periods need to be acquired first. The specific process is as follows: a plurality of remote sensing images of the same region over a period of time are acquired through a remote sensing satellite system; this embodiment uses the last 3 years as an example, and acquiring remote sensing images with a remote sensing satellite system is a known technique that is not described in detail here. For any two remote sensing images, the earlier remote sensing image is marked as the matching image and the later one as the target image; this embodiment carries out the subsequent description with these two remote sensing images as the matching image and the target image.
So far, the matching images and the target images in a plurality of remote sensing images are obtained through the method.
Step S002: and respectively acquiring a plurality of areas for the matched image and the target image, respectively constructing a graph structure according to the center point pair matched image and the target image of each area, and acquiring a plurality of first connected domains of the matched image and a plurality of second connected domains of the target image according to the matched node pairs in the two graph structures to acquire a plurality of connected domain pairs in the matched image and the target image.
It should be noted that, because of the motion of the remote sensing satellites, remote sensing images of the same region taken in different periods have different viewing angles, and this viewing-angle distortion makes the shapes of ground objects appear differently in the two images. However, because the deviation in viewing-angle direction is consistent across a remote sensing image as a whole, the matching image and the target image can be divided into regions by a neural network, so that each region corresponds to one type of land parcel; matching the regions of the two images then provides the basis for the subsequent analysis of the degree of deviation in different directions.
It should be further noted that remote sensing images of different periods may contain large changes in ground-object shape, so matching calculation cannot be performed directly on all areas according to the clustering of the two remote sensing images; instead, areas with smaller changes are matched, so as to avoid larger errors.
Specifically, the matching image and the target image are each divided into regions by a neural network. The neural network is a CNN with a ResNet network structure, and the loss function is the cross-entropy loss. A large number of remote sensing images are obtained as the training set, regions of different types are manually labeled in each remote sensing image of the training set, and the neural network is trained on this training set. The matching image and the target image are then input into the trained neural network, and a plurality of regions in the matching image and the target image are output. Region division by a neural network is a known technique and is not described in detail in this embodiment. Each region corresponds to a different type of land parcel in the actual city, such as roads, buildings, green belts and inland river areas.
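As a minimal Python sketch, assuming a per-pixel class map has already been produced by the trained network for each image (the function name segment_regions below is a hypothetical placeholder), the per-class regions and their center points could be extracted as follows:

```python
# Minimal sketch: turning a per-pixel class map into regions and center points.
# `segment_regions` (the trained CNN inference) is a hypothetical placeholder,
# not part of this embodiment's disclosed code.
import numpy as np
from scipy import ndimage

def extract_regions(label_map: np.ndarray):
    """Return (class_id, mask, center_point) for every connected region of every class."""
    regions = []
    for class_id in np.unique(label_map):
        class_mask = label_map == class_id
        labelled, num = ndimage.label(class_mask)                    # connected components of this class
        centers = ndimage.center_of_mass(class_mask, labelled, range(1, num + 1))
        for region_id, center in zip(range(1, num + 1), centers):
            regions.append((int(class_id), labelled == region_id, center))
    return regions

# usage (label maps would come from the trained segmentation network):
# match_regions  = extract_regions(segment_regions(match_image))
# target_regions = extract_regions(segment_regions(target_image))
```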
Further, taking the matching image as an example, a center point is obtained for each region in the matching image; obtaining the center point of a region is a known technique and is not described in detail in this embodiment. Each center point is taken as a node of a graph structure, the Euclidean distance between any two center points is taken as the edge weight between the corresponding nodes, and the obtained graph structure is marked as the graph structure of the matching image; the graph structure of the target image is obtained in the same way. After the two graph structures are obtained, the nodes capable of forming node pairs and the edges capable of forming edge pairs in the two graph structures are obtained through the VF2 algorithm, giving a plurality of node pairs and edge pairs; the VF2 algorithm is a known technique and is not repeated in this embodiment. Because the two graph structures come from remote sensing images of the same area, the matching image and the target image contain the same number of regions and therefore the same number of nodes, so matching through the VF2 algorithm is possible. The node of a node pair that lies in the graph structure of the matching image is marked as a first node, the region corresponding to the first node is marked as a first connected domain of the matching image, and a plurality of first connected domains in the matching image are obtained; the node of the node pair that lies in the graph structure of the target image is marked as the second node corresponding to the first node, the region corresponding to the second node is marked as a second connected domain of the target image, and the second connected domain matched with each first connected domain of the matching image is obtained in the target image. Each first connected domain and its matched second connected domain are marked as a connected domain pair, so as to obtain a plurality of connected domain pairs in the two remote sensing images.
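A minimal sketch of the graph construction and matching step, using networkx, whose GraphMatcher implements the VF2 algorithm; the complete graph over the region centers follows the construction above, while the distance tolerance atol is an illustrative assumption:

```python
# Minimal sketch of the graph construction and VF2 matching, using networkx's
# GraphMatcher (a VF2 implementation). Complete graphs over the region center
# points are built; the distance tolerance `atol` is an illustrative assumption.
import itertools
import math
import networkx as nx
from networkx.algorithms import isomorphism

def build_graph(centers):
    """Nodes are region indices; edge weights are Euclidean distances between center points."""
    g = nx.Graph()
    for i, c in enumerate(centers):
        g.add_node(i, center=tuple(c))
    for i, j in itertools.combinations(range(len(centers)), 2):
        g.add_edge(i, j, weight=math.dist(centers[i], centers[j]))
    return g

def match_region_pairs(match_centers, target_centers, atol=5.0):
    g1, g2 = build_graph(match_centers), build_graph(target_centers)
    edge_match = isomorphism.numerical_edge_match("weight", 0.0, atol=atol)
    gm = isomorphism.GraphMatcher(g1, g2, edge_match=edge_match)
    if gm.is_isomorphic():
        # mapping: node of the matching-image graph -> node of the target-image graph,
        # i.e. each (first connected domain, second connected domain) pair
        return sorted(gm.mapping.items())
    return []
```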
So far, by dividing the two remote sensing images into regions according to the actual land parcels and analyzing the graph structures, a plurality of matched regions of the two remote sensing images are obtained, namely each first connected domain of the matching image and its matched second connected domain in the target image, giving a plurality of connected domain pairs in the two remote sensing images.
Step S003: and obtaining the target degree of each connected domain pair according to the intersection ratio of each connected domain pair and the distribution of the adjacent connected domain pairs, obtaining a target connected domain pair, and analyzing and obtaining the direction weight of each direction according to the two connected domains in the target connected domain pair.
After the first connected domains of the matching image and the second connected domains of the target image are obtained, the target degree of each connected domain pair needs to be quantified. The target degree reflects how much the two connected domains of a pair change: the smaller the change of the two connected domains during the change of viewing angle, the larger the target degree. The connected domain pair with the largest target degree is then used to analyze the degree of deviation of each direction during the change of viewing angle, which avoids interference from the viewing-angle distortion of the connected domains when analyzing the degree of directional deviation. The target degree is obtained from the intersection ratio of the connected domains and the distribution of the adjacent connected domain pairs: the larger the intersection ratio, the larger the target degree; meanwhile, for the adjacent connected domain pairs, the closer a connected domain pair lies to adjacent pairs with larger intersection ratios, the smaller the possibility that the pair is deformed, the larger its satisfaction degree, and the larger its target degree.
Specifically, the intersection ratio of the two connected domains in each connected domain pair, namely the first connected domain and the second connected domain, is calculated, where the pixel points in the connected domains are the elements of the intersection-over-union calculation; the intersection ratio of each connected domain pair is thus obtained. Taking any one connected domain pair as an example, a plurality of adjacent connected domain pairs of this connected domain pair are acquired and marked as the reference connected domain pairs of this connected domain pair; here, two connected domains are adjacent when they have a common edge part, that is, when the two connected domains touch, and in this embodiment an adjacent connected domain pair is defined as one whose first connected domain is adjacent and whose second connected domain is adjacent, giving a plurality of reference connected domain pairs of the connected domain pair. The intersection ratios of all reference connected domain pairs are acquired and arranged in ascending order from small to large, and the obtained sequence is recorded as the adjacent cross-over ratio sequence of the connected domain pair. OTSU multi-threshold segmentation is carried out on the adjacent cross-over ratio sequence to divide it into a plurality of segments, and the average of the intersection ratios in each segment is calculated and recorded as the intersection ratio of that segment.
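A minimal sketch of the intersection-over-union of a connected domain pair and the multi-Otsu split of the adjacent cross-over ratio sequence, assuming scikit-image's threshold_multiotsu; the number of classes is illustrative, since the embodiment only specifies a plurality of segments:

```python
# Minimal sketch: intersection-over-union of a connected domain pair, and the
# multi-Otsu split of the adjacent cross-over ratio sequence. `classes=3` is an
# illustrative choice; the embodiment only specifies "a plurality of segments".
import numpy as np
from skimage.filters import threshold_multiotsu

def pair_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def split_reference_ious(reference_ious, classes=3):
    """Ascending IoU sequence -> (segment values, mean IoU of the segment) per segment."""
    seq = np.sort(np.asarray(reference_ious, dtype=float))
    thresholds = threshold_multiotsu(seq, classes=classes)           # OTSU multi-threshold segmentation
    segment_ids = np.digitize(seq, bins=thresholds)
    return [(seq[segment_ids == k], seq[segment_ids == k].mean())
            for k in range(classes) if np.any(segment_ids == k)]
```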
Taking any one segment as an example, the plurality of reference connected domain pairs whose intersection ratios fall in this segment are acquired, and the distance between each such reference connected domain pair and the connected domain pair is calculated: the distance is the average of the Euclidean distance between the center points of the two first connected domains of the two connected domain pairs and the Euclidean distance between the center points of the two second connected domains. The distances of all reference connected domain pairs corresponding to this segment are obtained and averaged, and the average is taken as the distribution distance of the segment. According to this method, the distribution distance of each segment is obtained; the distribution distances are arranged in ascending order from small to large and the obtained sequence is marked as the adjacent distribution sequence of the connected domain pair, while the intersection ratios of the segments are arranged in the order of their corresponding distribution distances and the obtained sequence is marked as the adjacent intersection sequence of the connected domain pair, so that elements at the same position in the adjacent distribution sequence and the adjacent intersection sequence correspond to the same segment. The adjacent distribution sequence is linearly normalized, that is, its elements are normalized, and the cosine similarity between the normalized sequence and the adjacent intersection sequence is taken as the initial satisfaction degree of the connected domain pair. Because the value range of cosine similarity is [-1, 1], the initial satisfaction degree is normalized to avoid negative values affecting subsequent calculation: the difference obtained by subtracting the initial satisfaction degree from 1 is divided by 2, and the quotient is taken as the satisfaction degree of the connected domain pair. According to this method, the satisfaction degree of each connected domain pair is acquired.
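A minimal numpy sketch of the satisfaction degree and the target degree, following the ordering and the (1 - cosine)/2 mapping described above; the variable names are illustrative:

```python
# Minimal numpy sketch of the satisfaction degree and the target degree; the
# ordering and the (1 - cosine) / 2 mapping follow the text above, while the
# variable names are illustrative.
import numpy as np

def satisfaction_degree(distribution_distances, segment_ious):
    d = np.asarray(distribution_distances, dtype=float)
    iou = np.asarray(segment_ious, dtype=float)
    order = np.argsort(d)                                  # ascending distribution distance
    adjacent_distribution = d[order]
    adjacent_intersection = iou[order]                     # segment IoUs in the same order
    span = adjacent_distribution.max() - adjacent_distribution.min()
    norm = (adjacent_distribution - adjacent_distribution.min()) / (span + 1e-12)
    cos = np.dot(norm, adjacent_intersection) / (
        np.linalg.norm(norm) * np.linalg.norm(adjacent_intersection) + 1e-12)
    return (1.0 - cos) / 2.0                               # maps [-1, 1] onto [0, 1]

def select_target_pair(pair_ious, satisfactions):
    target_degrees = np.asarray(pair_ious) * np.asarray(satisfactions)
    return int(np.argmax(target_degrees))                  # index of the target connected domain pair
```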
Further, the product of the intersection ratio and the satisfaction degree of each connected domain pair is taken as the target degree of that connected domain pair, and the connected domain pair with the largest target degree is taken as the target connected domain pair. The first connected domain in the target connected domain pair is marked as the target first connected domain, and its center point is marked as the first center point; the second connected domain in the target connected domain pair is marked as the target second connected domain, and its center point is marked as the second center point.
It should be further noted that, after the target connected domain pair is obtained, the analysis of the degree of directional deviation is carried out on this pair, because the deformation of its two connected domains is the smallest and therefore interferes least with the analysis. The two connected domains are superimposed by aligning their two center points, and 360 rays are cast through the center point, that is, one ray for each direction angle. From the distribution of the intersection points of the rays with the two connected domains, a first distance and a second distance on each ray are obtained; the smaller the difference between the first distance and the second distance, the smaller the deformation in the direction of that ray, and the larger the weight that is assigned, which provides the basis for the subsequent sift operator.
Specifically, the first center point is aligned with the second center point by translation, so that the target first connected domain and the target second connected domain are superimposed. The aligned first center point and second center point are marked as the target center point, and n rays are cast uniformly outwards from the target center point, each ray corresponding to one direction; this embodiment uses n = 360, that is, the 360 degrees of direction are divided uniformly into 360 parts. Taking any one ray as an example, the intersection point of the ray with the target first connected domain under superposition is marked as the first intersection point, and the Euclidean distance between the target center point and the first intersection point is recorded as the first distance in the direction corresponding to the ray; the intersection point of the ray with the target second connected domain under superposition is marked as the second intersection point, and the Euclidean distance between the target center point and the second intersection point is recorded as the second distance in the direction corresponding to the ray. The ratio of the first distance to the second distance, taken as the smaller value divided by the larger value, is recorded as the consistency degree of the direction corresponding to the ray. The consistency degree of the direction corresponding to each ray is obtained in this way, softmax normalization is carried out on all the consistency degrees, and the result is taken as the direction weight of each direction. The closer the ratio of the first distance to the second distance is to 1, the greater the consistency degree, indicating that the smaller the deformation in that direction, the greater the weight that should be assigned.
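A minimal sketch of the direction weights, assuming the two connected domain masks have already been translated so that their center points coincide; the one-pixel ray step and the maximum radius are illustrative assumptions:

```python
# Minimal sketch of the direction weights. It assumes the two connected domain
# masks have already been translated so that their center points coincide; the
# one-pixel ray step and the maximum radius are illustrative assumptions.
import numpy as np

def boundary_distance(mask, center, angle, max_r=2000):
    cy, cx = center
    dy, dx = np.sin(angle), np.cos(angle)
    dist = 0.0
    for r in range(1, max_r):
        y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
        if y < 0 or x < 0 or y >= mask.shape[0] or x >= mask.shape[1] or not mask[y, x]:
            break
        dist = float(r)                                    # last radius still inside the domain
    return dist

def direction_weights(first_mask, second_mask, center, n=360):
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    consistency = np.empty(n)
    for k, a in enumerate(angles):
        d1 = boundary_distance(first_mask, center, a)      # first distance
        d2 = boundary_distance(second_mask, center, a)     # second distance
        lo, hi = min(d1, d2), max(d1, d2)
        consistency[k] = lo / hi if hi > 0 else 0.0        # smaller value over larger value
    e = np.exp(consistency - consistency.max())            # softmax normalisation
    return e / e.sum()
```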
So far, the target degree of each connected domain pair has been quantified according to its intersection ratio and the distribution of its adjacent connected domain pairs, and the target connected domain pair has been obtained; this guarantees that the connected domains in the target connected domain pair deform minimally and reduces the interference with the analysis of directional deformation, and finally the direction weights of all directions are obtained based on the target connected domain pair.
Step S004: and carrying out weight distribution on a plurality of directions according to the direction weights to obtain a plurality of directions with the weight distribution, and carrying out improved sift operator registration according to the directions with the weight distribution so as to carry out city protection and updating evaluation based on three-dimensional space information.
It should be noted that sift operator registration is a known technique and is not described in detail in this embodiment; the general procedure of conventional sift operator registration is as follows:
1. acquiring a plurality of corresponding sift descriptor pairs in a reference image and a target image;
2. dividing the 360 directions of each sift descriptor in the reference image into 8 direction ranges, taking each sift descriptor as the center, dividing a region of size 4×4 containing 4×4 sub-areas, each sub-area having gradient strengths over the 8 direction ranges, so as to obtain a plurality of 128-dimensional sift feature vectors;
3. calculating the cosine similarity between the 128-dimensional sift feature vectors, and performing sift registration according to this cosine similarity to obtain the sift matching result.
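For reference, a hedged OpenCV sketch of this conventional pipeline (SIFT descriptor extraction followed by cosine-similarity matching of the 128-dimensional vectors); the similarity threshold is an illustrative choice, not a value from this embodiment:

```python
# Hedged OpenCV sketch of the conventional pipeline: SIFT descriptors followed by
# cosine-similarity matching of the 128-dimensional vectors. The similarity
# threshold 0.9 is an illustrative choice, not a value from this embodiment.
import cv2
import numpy as np

def sift_cosine_match(reference, target, sim_threshold=0.9):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference, None)     # 128-d descriptors
    kp2, des2 = sift.detectAndCompute(target, None)
    a = des1 / (np.linalg.norm(des1, axis=1, keepdims=True) + 1e-12)
    b = des2 / (np.linalg.norm(des2, axis=1, keepdims=True) + 1e-12)
    sim = a @ b.T                                          # cosine similarity matrix
    best = sim.argmax(axis=1)
    return [(kp1[i].pt, kp2[j].pt)
            for i, j in enumerate(best) if sim[i, j] >= sim_threshold]
```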
Specifically, compared with the traditional sift operator registration, the specific process of performing the sift operator registration in this embodiment is as follows:
1. acquiring a plurality of sift descriptor pairs in a reference image and a target image; the sift descriptors in the reference image and the sift descriptors in the target image have a one-to-one correspondence;
2. equally dividing the 360 directions of each sift descriptor in the reference image into 8 direction ranges, each direction range containing 45 directions; taking any one direction range as an example, accumulating the direction weights of all directions in the direction range and marking the sum as the weight of that direction range; the weights of all direction ranges are acquired in this way;
3. taking each sift descriptor in the reference image as the center, dividing a region of size 4×4 containing 4×4 sub-regions, each sub-region having gradient strengths over the 8 direction ranges, so that a 128-dimensional sift feature vector is obtained; each sift descriptor corresponds to a 128-dimensional sift feature vector, and each dimension of the sift feature vector corresponds to one direction range;
4. taking any one sift descriptor as an example, the similarity of the sift descriptor pair corresponding to this descriptor is obtained from the 128-dimensional sift feature vector corresponding to the descriptor; in the calculation, s denotes the similarity of the sift descriptor pair corresponding to the sift descriptor, i denotes the dimension index of the sift feature vector corresponding to the descriptor, p_i denotes the direction weight of the i-th dimension of the feature vector, and m_i denotes the gradient strength of the i-th dimension of the feature vector (one possible weighted form of this similarity is sketched after this list). The similarity of all sift descriptor pairs is obtained in this way.
5. And performing sift operator registration according to the similarity of all sift descriptor pairs to obtain an improved sift operator matching result.
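Because the exact similarity formula of step 4 is not reproduced in the text, the following sketch assumes one plausible weighted form, a cosine similarity in which each of the 128 dimensions is scaled by the weight of its direction range; it is an illustration only and is not asserted to be the formula of this embodiment:

```python
# Assumed weighted similarity: a cosine similarity in which each of the 128
# dimensions is scaled by the weight of its direction range. This is an
# illustration only, not the patent's exact formula.
import numpy as np

def direction_range_weights(per_degree_weights):
    """Accumulate 360 per-degree direction weights into 8 direction-range weights (45 degrees each)."""
    return np.asarray(per_degree_weights, dtype=float).reshape(8, 45).sum(axis=1)

def weighted_similarity(desc_ref, desc_tgt, range_weights):
    p = np.tile(range_weights, 16)                         # 4x4 sub-regions x 8 ranges = 128 dims
    num = np.sum(p * desc_ref * desc_tgt)
    den = np.sqrt(np.sum(p * desc_ref ** 2)) * np.sqrt(np.sum(p * desc_tgt ** 2)) + 1e-12
    return num / den
```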
And further, city protection and updating evaluation based on three-dimensional space information are carried out according to the improved sift operator matching result.
This embodiment is completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (10)

1. The city protection and update evaluation method based on the three-dimensional space information is characterized by comprising the following steps:
collecting a plurality of remote sensing images and obtaining a matching image and a target image;
respectively acquiring a plurality of areas for the matching image and the target image, constructing graph structures for the matching image and the target image respectively according to the center point of each area, acquiring a plurality of first connected domains of the matching image and a plurality of second connected domains of the target image according to the matched node pairs in the two graph structures, and acquiring a plurality of connected domain pairs in the matching image and the target image according to the first connected domains and the second connected domains;
obtaining the satisfaction degree of each connected domain pair according to the intersection ratio of each connected domain pair and the distribution of the adjacent connected domain pairs, obtaining a target connected domain pair according to the satisfaction degree, and obtaining the direction weight of each direction according to the two connected domains in the target connected domain pair;
and carrying out weight distribution on a plurality of directions according to the direction weights to obtain a plurality of directions with the weight distribution, and carrying out improved sift operator registration according to the directions with the weight distribution so as to carry out city protection and updating evaluation based on three-dimensional space information.
2. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the obtaining a plurality of areas for the matching image and the target image respectively comprises the following specific steps:
and dividing the region of the matching image and the target image through a neural network, respectively inputting the matching image and the target image into the trained neural network, and outputting to obtain a plurality of regions in the matching image and the target image.
3. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the constructing graph structures for the matching image and the target image respectively according to the center point of each region comprises the following specific steps:
acquiring a center point for each region in the matching image; taking each center point as a node of the graph structure, taking the Euclidean distance between any two center points as the edge weight between the corresponding nodes, and marking the obtained graph structure as the graph structure of the matching image;
acquiring a center point for each region in the target image; taking each center point as a node of the graph structure, taking the Euclidean distance between any two center points as the edge weight between the corresponding nodes, and marking the obtained graph structure as the graph structure of the target image.
4. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the method for obtaining a plurality of first connected domains of the matched image and a plurality of second connected domains of the target image according to the matched node pairs in the two graph structures comprises the following specific steps:
obtaining the nodes capable of forming node pairs and the edges capable of forming edge pairs in the two graph structures through the VF2 algorithm, thereby obtaining a plurality of node pairs and edge pairs; marking the node of a node pair that lies in the graph structure of the matching image as a first node, marking the region corresponding to the first node as a first connected domain of the matching image, and obtaining a plurality of first connected domains in the matching image; and marking the node of the node pair that lies in the graph structure of the target image as the second node corresponding to the first node, marking the region corresponding to the second node as a second connected domain of the target image, and obtaining, in the target image, the second connected domain matched with each first connected domain of the matching image.
5. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the obtaining a plurality of connected domain pairs in the matching image and the target image according to the first connected domain and the second connected domain comprises the following specific steps:
and marking the first connected domain and the matched second connected domain as a connected domain pair to obtain a plurality of connected domain pairs in the two remote sensing images.
6. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the obtaining the satisfaction degree of each connected domain pair according to the intersection ratio of each connected domain pair and the distribution of adjacent connected domain pairs comprises the following specific steps:
acquiring, for any one segment, its intersection ratio and the corresponding plurality of reference connected domain pairs, and recording the average of the Euclidean distance between the center points of the two first connected domains and the Euclidean distance between the center points of the two second connected domains in the two connected domain pairs as the distance between each reference connected domain pair and the connected domain pair; averaging all the distances to obtain the distribution distance of the segment;
acquiring the distribution distance of each segment, arranging the distribution distances in ascending order and marking the obtained sequence as the adjacent distribution sequence of the connected domain pair, and arranging the intersection ratios of the segments in the order of their corresponding distribution distances and marking the obtained sequence as the adjacent intersection sequence of the connected domain pair; carrying out linear normalization on the adjacent distribution sequence, taking the cosine similarity between the normalized sequence and the adjacent intersection sequence as the initial satisfaction degree of the connected domain pair, dividing the difference obtained by subtracting the initial satisfaction degree from 1 by 2, and taking the quotient as the satisfaction degree of the connected domain pair;
and acquiring the satisfaction degree of each connected domain pair.
7. The method for city protection and update evaluation based on three-dimensional space information according to claim 6, wherein the step of obtaining the intersection ratio of any one segment and a plurality of corresponding reference connected domain pairs comprises the following specific steps:
calculating the cross-over ratio of the two connected domains in each connected domain pair to obtain the cross-over ratio of each connected domain pair; for any one connected domain pair, acquiring a plurality of adjacent connected domain pairs of the connected domain pair, and marking the adjacent connected domain pairs as reference connected domain pairs of the connected domain pair;
acquiring the cross ratio of all reference connected domain pairs of the connected domain pairs, and arranging the cross ratios in ascending order from small to large, wherein the obtained sequence is marked as an adjacent cross ratio sequence of the connected domain pairs; carrying out OTSU multi-threshold segmentation on adjacent cross-over sequences, dividing the adjacent cross-over sequences into a plurality of sections, calculating the average value of the cross-over ratios in each section, and recording the average value as the cross-over ratio of each section; each element in the adjacent cross-over sequence corresponds to a reference connected domain pair, and each section containing a plurality of elements corresponds to a plurality of reference connected domain pairs.
8. The city protection and update evaluation method based on three-dimensional space information according to claim 7, wherein the obtaining the target connected domain pair according to the satisfaction degree comprises the following specific steps:
and taking the product of the intersection ratio and the satisfaction degree of each connected domain pair as the target degree of each connected domain pair, and taking the connected domain pair with the maximum target degree as the target connected domain pair.
9. The city protection and update evaluation method based on three-dimensional space information according to claim 1, wherein the obtaining the direction weight of each direction according to two connected domains in the target connected domain pair comprises the following specific steps:
acquiring a first center point and a second center point;
aligning the first center point with the second center point by translation; marking the aligned first center point and second center point as the target center point, and casting n rays uniformly outwards from the target center point, each ray corresponding to one direction; for any one ray, marking the intersection point of the ray with the target first connected domain under superposition as a first intersection point, and recording the Euclidean distance between the target center point and the first intersection point as the first distance in the direction corresponding to the ray; marking the intersection point of the ray with the target second connected domain under superposition as a second intersection point, and recording the Euclidean distance between the target center point and the second intersection point as the second distance in the direction corresponding to the ray; recording the ratio of the first distance to the second distance as the consistency degree of the direction corresponding to the ray; and obtaining the consistency degree of the direction corresponding to each ray, carrying out softmax normalization on all the consistency degrees, and taking the obtained result as the direction weight of each direction.
10. The method for city protection and update evaluation based on three-dimensional space information according to claim 9, wherein the obtaining the first center point and the second center point comprises the following specific steps:
the first connected domain in the target connected domain pair is marked as the target first connected domain, and the center point of the target first connected domain is marked as the first center point; the second connected domain in the target connected domain pair is marked as the target second connected domain, and the center point of the target second connected domain is marked as the second center point.
CN202310965690.3A 2023-08-01 2023-08-01 Urban protection and update evaluation method based on three-dimensional space information Active CN116958172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310965690.3A CN116958172B (en) 2023-08-01 2023-08-01 Urban protection and update evaluation method based on three-dimensional space information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310965690.3A CN116958172B (en) 2023-08-01 2023-08-01 Urban protection and update evaluation method based on three-dimensional space information

Publications (2)

Publication Number Publication Date
CN116958172A true CN116958172A (en) 2023-10-27
CN116958172B CN116958172B (en) 2024-01-30

Family

ID=88452824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310965690.3A Active CN116958172B (en) 2023-08-01 2023-08-01 Urban protection and update evaluation method based on three-dimensional space information

Country Status (1)

Country Link
CN (1) CN116958172B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200461A (en) * 2014-08-04 2014-12-10 西安电子科技大学 Mutual information image selected block and sift (scale-invariant feature transform) characteristic based remote sensing image registration method
CN106558072A (en) * 2016-11-22 2017-04-05 重庆信科设计有限公司 A kind of method based on SIFT feature registration on remote sensing images is improved
CN115713694A (en) * 2023-01-06 2023-02-24 东营国图信息科技有限公司 Land surveying and mapping information management method
CN115984309A (en) * 2021-12-10 2023-04-18 北京百度网讯科技有限公司 Method and device for training image segmentation model and image segmentation
CN116310447A (en) * 2023-05-23 2023-06-23 维璟(北京)科技有限公司 Remote sensing image change intelligent detection method and system based on computer vision
CN116433887A (en) * 2023-06-12 2023-07-14 山东鼎一建设有限公司 Building rapid positioning method based on artificial intelligence


Also Published As

Publication number Publication date
CN116958172B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN110097093B (en) Method for accurately matching heterogeneous images
CN110427937B (en) Inclined license plate correction and indefinite-length license plate identification method based on deep learning
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN101980250B (en) Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN109446894B (en) Multispectral image change detection method based on probability segmentation and Gaussian mixture clustering
CN113033520B (en) Tree nematode disease wood identification method and system based on deep learning
CN110111248B (en) Image splicing method based on feature points, virtual reality system and camera
CN111340701B (en) Circuit board image splicing method for screening matching points based on clustering method
CN106447704A (en) A visible light-infrared image registration method based on salient region features and edge degree
CN110992263B (en) Image stitching method and system
CN106548462A (en) Non-linear SAR image geometric correction method based on thin-plate spline interpolation
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN113361542A (en) Local feature extraction method based on deep learning
Kim et al. A robust matching network for gradually estimating geometric transformation on remote sensing imagery
CN110969212A (en) ISAR image classification method based on spatial transformation three-channel convolution
CN112288758A (en) Infrared and visible light image registration method for power equipment
CN109508674B (en) Airborne downward-looking heterogeneous image matching method based on region division
CN108509835B (en) PolSAR image ground object classification method based on DFIC super-pixels
CN104820992B (en) A kind of remote sensing images Semantic Similarity measure and device based on hypergraph model
CN116167921B (en) Method and system for splicing panoramic images of flight space capsule
CN116958172B (en) Urban protection and update evaluation method based on three-dimensional space information
US11847811B1 (en) Image segmentation method combined with superpixel and multi-scale hierarchical feature recognition
CN103310456A (en) Multi-temporal/multi-mode remote sensing image registration method based on Gaussian-Hermite moments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant