CN115294190B - Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle - Google Patents
- Publication number
- CN115294190B (application CN202211219125.4A)
- Authority
- CN
- China
- Prior art keywords
- matching
- image
- feature
- point
- feature data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 7/97 — Determining parameters from multiple pictures
- G06T 2207/20221 — Image fusion; image merging (indexing scheme G06T 2207/20212, image combination)
Abstract
The invention relates to the technical field of vision measurement, and in particular to a method for measuring soil landslide volume for municipal engineering based on an unmanned aerial vehicle. After images of the collapse site are shot from multiple angles and the point cloud data corresponding to each image are acquired, a selection feature is constructed for every point on each image from its image features, and the feature points of each image are determined from these selection features. Matching between different images is completed by calculating the similarity between their feature points, so that the point cloud data corresponding to matched images can be fused, the reconstruction of the point cloud data completed, and the collapse volume finally calculated from the reconstructed point cloud data. Compared with the prior art, the method needs no manually placed image control points near the landslide, adapts better to the terrain at different landslide sites, and improves measurement accuracy over landslide volume measurement without image control points.
Description
Technical Field
The invention relates to the technical field of vision measurement, and in particular to a method for measuring soil landslide volume for municipal engineering based on an unmanned aerial vehicle.
Background
Collapse is an accident with a high occurrence rate in municipal engineering, and its subsequent treatment first requires that the collapse volume be measured. Existing methods for measuring collapse volume are mainly total station measurement and GPS measurement. Total station measurement of collapse volume is simple to operate, but it places high intervisibility requirements on the surveyed area: it works well in small areas with good intervisibility and poorly in large areas with poor intervisibility. GPS measurement is not restricted by intervisibility conditions and is faster and more precise than a total station, but in survey areas with weak GPS signals it may be disturbed by special terrain factors, leading to lower precision and higher cost. Unmanned aerial vehicle low-altitude aerial survey is therefore increasingly widely applied as a new surveying and mapping means.
Unmanned aerial vehicle aerial survey offers flexibility, high image resolution, fast data acquisition and little field labor, and is suitable for measuring collapse volume in municipal engineering. Existing unmanned aerial vehicle aerial survey technology generally performs oblique photogrammetry with image control points arranged on the ground and then generates the point cloud data of the collapsed body. This approach first requires a large amount of manpower to arrange image control points around the collapse site, and when the terrain around the collapse site is complex and dangerous, survey personnel cannot actually complete the arrangement. The existing aerial survey method therefore not only consumes much manpower but also cannot handle collapse sites where control points cannot be placed because the terrain is dangerous; moreover, when image control points cannot be set, positioning errors of the unmanned aerial vehicle cannot be corrected against control point positions, so the measured point cloud data contain errors.
Disclosure of Invention
In order to solve the problem that the soil collapse volume measurement cannot be accurately and efficiently completed in the prior art, the invention provides an unmanned aerial vehicle-based soil collapse volume measurement method for municipal engineering, which adopts the following technical scheme:
the invention discloses an unmanned aerial vehicle-based method for measuring landslide volume for municipal engineering, which comprises the following steps:
shooting images of multiple angles at a collapse position and acquiring point cloud data corresponding to each image;
obtaining the selection feature of each point according to the differences in gray gradient direction, gray gradient amplitude and gray value between each point on the image and each point in its neighborhood, and determining the feature points of the image according to the selection features of all points on the image;
determining all edges on the image, selecting a set number of edges closest to a certain feature point on the image, and taking the normalized gray gradient amplitudes of all points on the set number of edges as the matching feature data set of that feature point, thereby determining the matching feature data set of every feature point on the image, wherein the normalized gray gradient amplitudes are the matching feature data of the feature point;
determining the similarity between the feature points of different images according to the matching feature data sets of all the feature points on each image;
selecting an initial image from all images as the matching target image and taking the remaining images as images to be matched; determining the optimal matching mode of the feature points of the matching target image and each image to be matched by maximizing the expectation of the similarity between feature points of different images; calculating, under the optimal matching mode, the overall similarity between the matching target image and each image to be matched; and selecting the image to be matched with the maximum overall similarity to the matching target image as the matching object of the matching target image, completing the matching of the matching target image;
taking the most recently matched image to be matched as the new matching target image and the images not yet matched as the new images to be matched, determining the matching object of the new matching target image from the new images to be matched, and repeating the process of determining matching objects until the matching of all images is completed;
and fusing the point cloud data corresponding to the mutually matched images to complete the reconstruction of the point cloud data, and calculating the collapse volume according to the reconstruction result of the point cloud data.
The invention has the beneficial effects that:
according to the method, after pictures of a plurality of angles at the collapse position are shot, matching between different images is completed through the characteristic points on each image, so that fusion between local point cloud data corresponding to different images is completed, reconstructed point cloud data is obtained, and the collapse volume is obtained through calculation of the reconstructed point cloud data. According to the method, the image control points do not need to be manually arranged around the collapse position by measuring personnel, so that the labor consumption is reduced, and the collapse volume measurement accuracy is improved compared with the situation that the image control points cannot be arranged due to the fact that the terrain around the collapse position is severe, namely, the method is efficient and accurate.
Further, the specific method for obtaining the selection feature of each point according to the differences in gray gradient direction, gray gradient amplitude and gray value between each point on the image and each point in its neighborhood is:
$$F_i^j=\sum_{m=1}^{8}\left(\left|\Delta A_m^j\right|+\left|\Delta g_m^j\right|+\Delta\theta_m^j\right)$$

wherein $F_i^j$ denotes the selection feature of the j-th point on the i-th image; $\Delta A_m^j$ denotes the difference between the normalized gray gradient amplitude of the j-th point on the i-th image and that of the m-th point in its eight-neighborhood; $\Delta g_m^j$ denotes the difference between the normalized gray value of the j-th point and that of the m-th point in its eight-neighborhood; and $\Delta\theta_m^j$ denotes the normalized value of the difference between the gray gradient direction of the j-th point and that of the m-th point in its eight-neighborhood. The difference between two gray gradient directions is the acute angle between the two straight lines on which the two gray gradient directions lie, so its value range is [0, π/2].
Further, the specific process of determining the similarity between the feature points of different images according to the matching feature data sets of all the feature points on each image is as follows:
randomly selecting a feature point in one image as the first feature point and a feature point in the other image as the second feature point; calculating the distance value from the first feature point to each point on its set number of closest edges, and the distance value from the second feature point to each point on its set number of closest edges; sorting all the obtained distance values from large to small to obtain the sorting serial number of each distance value, and taking the sorting serial number of each distance value as the matching priority of the normalized gray gradient amplitude at the point corresponding to that distance value, wherein the smaller the serial number, the higher the matching priority;
selecting the matching feature data with the highest priority from the matching feature data sets of the first feature point and the second feature point, judging which of the two matching feature data sets it belongs to, and, after the judgment is finished, searching the matching feature data set that does not comprise the matching feature data with the highest priority for the matching feature data with the smallest normalized gray gradient amplitude difference from it, the smallest normalized gray gradient amplitude difference obtained from the two matching feature data being the first matching feature data difference value of the first feature point and the second feature point;
removing the two matching feature data giving the first matching feature data difference value from the matching feature data sets of the first feature point and the second feature point, selecting the new matching feature data with the highest priority from the matching feature data sets after the removing operation, judging which of the two matching feature data sets it belongs to, and, after the judgment is finished, searching the matching feature data set that does not comprise the new matching feature data with the highest priority for the matching feature data with the smallest normalized gray gradient amplitude difference from it, the smallest normalized gray gradient amplitude difference obtained from the two matching feature data being the second matching feature data difference value of the first feature point and the second feature point;
repeating the process of obtaining matching feature data difference values until at least one of the matching feature data sets of the first feature point and the second feature point becomes an empty set, and defining the matching feature data difference value of each matching feature data remaining in the non-empty matching feature data set as 1, wherein the smaller the matching feature data difference values are, the more similar the first feature point and the second feature point are;
multiplying each matching feature data difference value by the sum of the weights of all matching feature data used for determining that matching feature data difference value, then summing all the obtained products to obtain the difference value between the first feature point and the second feature point, the similarity between the first feature point and the second feature point being 1 minus this difference value;
all the matching feature data used for determining the difference value of the matching feature data comprise the matching feature data for obtaining the minimum normalized gray gradient amplitude difference value and the matching feature data with the difference value of the matching feature data defined as 1;
the weight of the matching feature data is:

$$w_q=\frac{2(Q-q+1)}{Q(Q+1)}$$

wherein $w_q$ is the weight of the matching feature data with sequence number q, and $Q$ is the total number of matching feature data.
Further, after the optimal matching mode of the feature points of the matching target image and each image to be matched is determined, the similarity of every feature point pair between the matching target image and each image to be matched under the optimal matching mode is calculated, and the variance of the similarities of all feature point pairs is calculated, so that one variance is obtained for the matching target image and each image to be matched; the maximum value of the obtained variances is determined, and the image to be matched corresponding to the maximum variance is matched with the matching target image, completing the matching of the matching target image.
Drawings
FIG. 1 is a flow chart of the method for measuring landslide volume for municipal engineering based on an unmanned aerial vehicle according to the invention.
Detailed Description
The overall concept of the invention is as follows: after collapse site images at different angles and the point cloud data corresponding to each image are acquired, a selection feature is constructed for every point on each image from the image feature information of all points, and the feature points on each image are determined from the selection features; the matching feature data set of each feature point is determined from the image information adjacent to it, the similarity between feature points on different images is calculated, and matching between the images at different angles is completed according to the similarity; the point cloud data corresponding to the images at different angles are then reconstructed according to the matching result, and the collapse volume is calculated on the basis of the reconstruction result.
The invention relates to a method for measuring landslide volume for municipal engineering based on an unmanned aerial vehicle, which is described in detail below with reference to the accompanying drawings and embodiments.
An embodiment of the unmanned aerial vehicle-based method for measuring soil landslide volume for municipal engineering is shown in figure 1 and comprises the following specific processes:
1. Acquiring images of multiple angles at the collapse position and the point cloud data corresponding to each image.
In the embodiment, the multi-angle image and the corresponding point cloud data of the landslide are obtained by the unmanned aerial vehicle, and of course, in other embodiments, the multi-angle image and the corresponding point cloud data of the landslide can be obtained by adopting other existing methods.
An unmanned aerial vehicle route is set so as to completely cover the collapse site, and a high-definition camera and a laser range finder are arranged on the unmanned aerial vehicle. Since a sufficiently clear image of the collapse site must be obtained, the unmanned aerial vehicle has to keep flying at low altitude; accordingly, each image it obtains captures only local information of the collapse site, and the point cloud data corresponding to each image are likewise incomplete, local point cloud data.
During the flight, one image of the collapse site and the local point cloud data corresponding to it are obtained at each shooting position, so images of the collapse site at multiple angles and the corresponding local point cloud data are obtained. Let $I_i$ denote the i-th gray-scale image of the collapse site and $P_i$ the local point cloud data corresponding to $I_i$, and assume that the obtained gray-scale images correspond to different shooting angles, i.e. one shooting angle corresponds to one gray-scale image.
2. Constructing the selection features of all points on each image according to their image features, and then determining the feature points on each image according to the selection features of all points.
Because the feature points to be determined on an image are used for matching, they should differ obviously from the other points on the image, and such points generally carry the edge information and gray-level change information of the image. This embodiment therefore constructs the selection feature of each point on the image as follows:
computing images using Sobel operatorsThe normalized gray scale gradient amplitude and the gray scale gradient direction of each point. To be provided withRepresenting imagesGo to the firstNormalized gray scale gradient magnitude of a dot toRepresenting imagesGo to the firstGray scale gradient direction of dots, in order toRepresenting imagesGo to the firstGray value of a dot as an imageGo to the firstThe gray gradient direction of a point relative to its neighborhood,Establishing selection characteristics of the point by gray gradient amplitude and gray value difference degreeThe formula is as follows:
wherein,to indicate the first on the imageNormalized gray gradient amplitude of point and the first in eight neighborhoods thereofThe difference in normalized gray scale gradient magnitude for a point,represent the first on the imageNormalized gray value of a point and the first in its eight neighborhoodsThe difference in the normalized gray-scale values of the dots,represent the first on the imageGray gradient direction of point and the eighth neighborhood thereofNormalized value of the difference in gray gradient direction of the dots. The difference value of the two gray scale gradient directions is the two gray scale gradientsThe acute angle included angle between the two straight lines where the direction is located, so the value range of the difference value of the two gray scale gradient directions is [0,]。
At this point, the selection feature of every point on an image can be computed. For each image, all points are sorted by their selection features from large to small, and the top K points are taken as the feature points of the image.

Performing the above operation on each image yields K feature points per image; let $T_i^k$ denote the k-th feature point on the i-th image. In this embodiment K is preferably set to 100.
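As an illustration of this step, the selection feature computation and top-K extraction can be sketched in Python with OpenCV and NumPy. The sum of the three normalized differences over the eight-neighborhood follows the reconstruction of the formula above, and the function names and the border handling (neighbors wrap around via np.roll) are choices of this sketch, not prescribed by the method:

```python
import cv2
import numpy as np

def selection_features(gray: np.ndarray) -> np.ndarray:
    """Per-point selection feature: sum over the eight-neighborhood of the
    differences in normalized gradient amplitude, normalized gray value and
    gradient direction (acute angle between lines, normalized to [0, 1])."""
    g = gray.astype(np.float64) / 255.0                 # normalized gray value
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                            # normalized gradient amplitude
    theta = np.arctan2(gy, gx)                          # gradient direction

    feat = np.zeros_like(g)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        mag_n = np.roll(mag, (dy, dx), axis=(0, 1))     # neighbor values (borders wrap)
        g_n = np.roll(g, (dy, dx), axis=(0, 1))
        th_n = np.roll(theta, (dy, dx), axis=(0, 1))
        d_th = np.abs(theta - th_n) % np.pi             # angle between the two lines
        d_th = np.minimum(d_th, np.pi - d_th)           # acute angle in [0, pi/2]
        feat += np.abs(mag - mag_n) + np.abs(g - g_n) + d_th / (np.pi / 2)
    return feat

def top_k_feature_points(gray: np.ndarray, k: int = 100) -> np.ndarray:
    """(row, col) coordinates of the K points with the largest selection
    features, with K = 100 as preferred in this embodiment."""
    feat = selection_features(gray)
    idx = np.argsort(feat.ravel())[::-1][:k]
    return np.column_stack(np.unravel_index(idx, feat.shape))
```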
3. Determining the matching feature data set of each feature point according to the image information adjacent to it, and determining the similarity between feature points on different images according to the matching feature data sets of the feature points on each image.
For an image, each feature point on the image is a point with stronger image features, so that the adjacent areas of the feature points have more texture features, or the feature points are part of the texture features. Therefore, the present embodiment takes the texture features of the neighboring region of each feature point as the matching feature data set of each feature point.
The determination of the matching feature data set is as follows:
The CANNY operator is used to determine all edges on each image, and the distance value between every feature point and every edge on the same image is calculated; the distance value between a feature point and an edge is the minimum of the distance values from the feature point to all points on the edge.

For the k-th feature point on the i-th image, the distance values from all edges on the image are sorted from small to large and the first n values are selected; the normalized gray gradient amplitudes of all points on the n nearest edges of the k-th feature point are then taken as the matching feature data set of that feature point.

Through the above process, a matching feature data set composed of normalized gray gradient amplitudes is obtained for every feature point on every image.
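A sketch of this step under the same assumptions: edges are taken here as the connected components of the CANNY edge map, and the Canny thresholds and the number n of nearest edges are illustrative parameters:

```python
import cv2
import numpy as np

def matching_feature_sets(gray: np.ndarray, feature_points: np.ndarray,
                          n_edges: int = 5):
    """For each feature point, collect the normalized gray gradient amplitudes
    of all points on its n nearest edges; the point-to-edge distance is the
    minimum distance from the point to any point of the edge."""
    edge_map = cv2.Canny(gray, 100, 200)
    # treat each connected component of the edge map as one edge
    num, labels = cv2.connectedComponents((edge_map > 0).astype(np.uint8))
    edge_pts = [np.column_stack(np.nonzero(labels == i)) for i in range(1, num)]

    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                            # normalized gradient amplitude

    sets = []
    for p in feature_points:
        dists = [np.linalg.norm(pts - p, axis=1).min() for pts in edge_pts]
        nearest = np.argsort(dists)[:n_edges]           # n closest edges
        sets.append(np.concatenate([mag[tuple(edge_pts[i].T)] for i in nearest]))
    return sets
```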
After determining the matching feature data set of the respective feature points on each image, the process of determining the similarity between the feature points on different images is as follows:
assuming that the feature points a and b belong to different images, respectively, calculatingComparing the point corresponding to each normalized gray scale gradient amplitude in the matched feature data set with the point corresponding to each normalized gray scale gradient amplitude in the matched feature data setAnd simultaneously calculate the distance ofComparing the point corresponding to each normalized gray scale gradient amplitude in the matched feature data set with the point corresponding to each normalized gray scale gradient amplitude in the matched feature data setThe distances are then sorted from large to small in a unified manner, the sequence number of each distance in the sorting result is used as the normalized gray gradient amplitude value corresponding to each distance, namely the matching priority label of the matching feature data, and the closer the sorting of a certain distance in the sorting result is, the smaller the sequence number of the corresponding matching priority label is, and the higher the matching priority of the corresponding matching feature data is.
The matching feature data corresponding to the matching priority label with the highest priority, i.e. the matching feature data with the highest matching priority, is selected from the matching feature data sets of a and b, and it is judged which of the two sets it belongs to. After the judgment is finished, the matching feature data with the smallest normalized gray gradient amplitude difference from it is searched for in the other matching feature data set, its matching priority label is obtained, and the pairing between the two matching feature data is finished. The normalized gray gradient amplitude difference is taken as the first matching feature data difference value of feature points a and b; the smaller this difference value, i.e. the closer it is to 0, the more similar feature points a and b are.
The two paired matching feature data are then removed from the matching feature data sets of a and b, the matching feature data with the highest priority among the remaining data is selected, and the set it belongs to is judged; after the judgment is finished, the matching feature data with the smallest normalized gray gradient amplitude difference from it is searched for in the other matching feature data set, and this normalized gray gradient amplitude difference is taken as the second matching feature data difference value of feature points a and b.
The above operation is repeated to obtain a series of matching feature data difference values, until at least one of the matching feature data sets of a and b becomes an empty set; the matching feature data difference value of each matching feature data left unpaired in the remaining non-empty set is then directly defined as the maximum value, namely 1.
Weighting the acquired series of matching feature data difference values based on the matching priority labels, wherein the weighting process comprises the following steps:
for each matching feature data in the matching feature data sets of a and b, the higher the priority of the corresponding matching priority label, the greater the weight of the matching feature data in the process of calculating the similarity between a and b.
With q as the sequence number of a matching priority label and Q as the total number of matching priority labels, which is also the total number of matching feature data in the matching feature data sets of a and b, the weight of the matching feature data with label q is:

$$w_q=\frac{2(Q-q+1)}{Q(Q+1)}$$
For one matching feature data difference value, the sum of the weights of all matching feature data that determine it is taken as the weight of that difference value. That is, if a matching feature data difference value is determined by pairing two matching feature data, its weight is the sum of the weights corresponding to those two matching feature data; if it is determined by a matching feature data that was not successfully paired, its weight is the weight of that unpaired matching feature data.
All matching feature data difference values are summed with their corresponding weights to obtain the difference value $D_{ab}$ between feature points a and b, and the similarity between a and b is determined as $1-D_{ab}$. The closer the similarity is to 1, the more similar feature points a and b are; the closer it is to 0, the more dissimilar they are.
Therefore, the similarity between the feature points on different images can be obtained.
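The pairing-and-weighting procedure just described can be sketched as follows; the linearly decreasing weight form summing to 1 mirrors the reconstructed weight formula above and is an assumption of this sketch:

```python
def feature_point_similarity(data_a, dist_a, data_b, dist_b):
    """Similarity between two feature points from their matching feature data
    sets (normalized gradient amplitudes). dist_*: distance from the feature
    point to the edge point carrying each datum; distances are sorted from
    large to small to assign the unified matching priority labels."""
    items = [(d, m, 0) for d, m in zip(dist_a, data_a)] + \
            [(d, m, 1) for d, m in zip(dist_b, data_b)]
    items.sort(key=lambda t: -t[0])                  # label q = rank in this order
    Q = len(items)
    # assumed weight form: linearly decreasing with the label, summing to 1
    w = [2.0 * (Q - q) / (Q * (Q + 1)) for q in range(Q)]

    remaining = [(q, m, s) for q, (_, m, s) in enumerate(items)]
    diff_sum = 0.0
    while remaining:
        q0, m0, s0 = remaining[0]                    # highest remaining priority
        others = [r for r in remaining if r[2] != s0]
        if not others:                               # one set empty: difference = 1
            diff_sum += sum(w[q] for q, _, _ in remaining)
            break
        # closest amplitude in the other set
        q1, m1, s1 = min(others, key=lambda r: abs(r[1] - m0))
        # weight of a paired difference = sum of both data's weights
        diff_sum += (w[q0] + w[q1]) * abs(m0 - m1)
        remaining = [r for r in remaining if r not in ((q0, m0, s0), (q1, m1, s1))]
    return 1.0 - diff_sum                            # similarity in [0, 1]
```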
4. Matching the images acquired at all angles according to the similarity between feature points on different images.
One of the images is randomly selected as the initial image, and the matching degree between the initial image and each of the remaining images is calculated.
Specifically, since each of two images has K feature points, K feature point pairs can be constructed from the feature points of the two images; the similarity of each pair is calculated, and the total similarity of the feature points between the two images under this pairing is obtained by summing the similarities of all K pairs.
Since the total similarity differs with the feature point pairing mode, in order to determine the most accurate similarity between the two images, namely the maximum total similarity, this embodiment takes the total similarity as an expectation and completes the similarity calculation between the two images with the maximum expectation, i.e. confirms the optimal feature point pairing mode between the two images.
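The patent does not name a solver for this maximum-expectation pairing; one standard way to obtain a one-to-one pairing that maximizes the total similarity is the Hungarian algorithm, e.g. via scipy.optimize.linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_pairing(sim: np.ndarray):
    """sim[k, l]: similarity between the k-th feature point of the matching
    target image and the l-th feature point of an image to be matched.
    Returns the one-to-one pairing maximizing the total similarity,
    together with the per-pair similarities."""
    rows, cols = linear_sum_assignment(-sim)         # negate to maximize
    pair_sims = sim[rows, cols]
    return list(zip(rows, cols)), pair_sims
```

The per-pair similarities returned here are what the variance-based selection described below operates on.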
Preferably, since the purpose of image matching is to fuse local point cloud data into a reconstructed point cloud, and since both the efficiency and the accuracy of image fusion, i.e. of point cloud data fusion, must be taken into account, half of the feature points of the two images should be completely matched and the other half completely unmatched: the completely matched half are the splicing feature points of the two images, i.e. the image overlapping portion, while the unmatched half are the non-overlapping portion, i.e. the effective basis for reconstructing the point cloud data.
In the optimal image matching mode considered by the invention, half of the feature points of the two images are completely matched and the other half completely unmatched, so half of the K feature point pair similarities take the value 1 and the other half 0; the variance of these similarity data is then 0.25, whereas in cases deviating from this half-matched, half-unmatched situation the variance of the similarity data is less than 0.25.
Therefore, in order to balance image fusion efficiency and fusion accuracy as far as possible, after the similarity calculation between every image to be matched and the current matching target image has been completed with the maximum expectation, i.e. after the optimal feature point pairing mode between every image to be matched and the current matching target image has been determined, the image to be matched whose K feature point pair similarities with the matching target image, under the optimal pairing mode, have the variance closest to 0.25 is selected as the matching object of the current matching target image.
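Because the pair similarities lie in [0, 1], their variance is at most 0.25, so selecting the maximum variance (as the claims phrase it) is equivalent to selecting the variance closest to 0.25. A minimal sketch:

```python
import numpy as np

def select_matching_object(pair_sims_per_image):
    """pair_sims_per_image[i]: the K pair similarities between the matching
    target image and the i-th image to be matched under its optimal pairing.
    The variance of values in [0, 1] is bounded by 0.25, so the maximum
    variance is also the variance closest to 0.25."""
    variances = [float(np.var(s)) for s in pair_sims_per_image]
    return int(np.argmax(variances)), variances
```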
The matching target images are the images, including the initial image, that have already served as matching targets; the images to be matched are the images that have not yet been matching targets.
On the basis of completing the optimal feature point pairing mode between two images with the maximum expectation, the matching of all images is completed in sequence according to the selection standard of maximum variance of the feature point pair similarity data.
Obviously, once the optimal feature point pairing mode between two images has been determined with the maximum expectation, the image to be matched that is most similar to the matching target image is in fact already determined; in other embodiments, the image to be matched most similar to the matching target image can therefore be used directly as the matching object of the matching target image, and the matching of all images completed in sequence.
5. Reconstructing the point cloud data corresponding to each image according to the matching results between the images acquired at all angles, and calculating the collapse volume according to the reconstruction result.
After the two images are matched, the point cloud data corresponding to the two images can be fused.
Specifically, the feature point pairs formed between the two images whose similarity is greater than a set threshold are selected as corresponding points for the point cloud data fusion process; the rotation and translation matrix between the two point cloud data sets is obtained from the determined corresponding points, the point cloud data corresponding to the two images are converted into the same coordinate system by this rotation and translation matrix, and the two point cloud data sets are fused.
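A common way to obtain this rotation and translation matrix from corresponding points is the SVD-based (Kabsch) least-squares method; the patent does not specify the estimator, so the following is a sketch under that assumption:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t with R @ src[i] + t ≈ dst[i],
    estimated from corresponding 3-D points (Kabsch / SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def fuse_point_clouds(cloud_a, cloud_b, corr_a, corr_b):
    """Convert cloud_b into cloud_a's coordinate system using corresponding
    points (feature point pairs whose similarity exceeds the threshold)."""
    R, t = rigid_transform(corr_b, corr_a)
    return np.vstack([cloud_a, cloud_b @ R.T + t])
```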
The fusion of all point cloud data is completed by applying this fusion mode to the point cloud data of each pair of matched images, i.e. the reconstruction of all point cloud data is completed.
On the basis of the reconstruction result of the point cloud data, the collapse volume can be calculated by an existing volume calculation method.
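As one example of such a conventional method (the patent leaves the choice open), the volume can be estimated by gridding the reconstructed cloud into a surface model and summing the column volumes above a reference base elevation; base_z and the cell size are assumptions of this sketch:

```python
import numpy as np

def collapse_volume(points: np.ndarray, base_z: float, cell: float = 0.1) -> float:
    """Grid the cloud in the horizontal plane, take the highest point per cell
    as a surface model, and sum column volumes above base elevation base_z."""
    cells = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        cells[key] = max(cells.get(key, -np.inf), z)
    return sum(max(z - base_z, 0.0) for z in cells.values()) * cell * cell
```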
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (3)
1. An unmanned aerial vehicle-based method for measuring landslide volume for municipal engineering is characterized by comprising the following steps:
shooting images of multiple angles at a collapse position and acquiring point cloud data corresponding to each image;
obtaining the selection feature of each point according to the differences in gray gradient direction, gray gradient amplitude and gray value between each point on the image and each point in its neighborhood, and determining the feature points of the image according to the selection features of all points on the image;
determining all edges on the image, selecting a set number of edges closest to a certain feature point on the image, and taking the normalized gray gradient amplitudes of all points on the set number of edges as the matching feature data set of that feature point, thereby determining the matching feature data set of every feature point on the image, wherein the normalized gray gradient amplitudes are the matching feature data of the feature point;
determining the similarity between the feature points of different images according to the matching feature data sets of all the feature points on each image;
selecting an initial image from all images as the matching target image and taking the remaining images as images to be matched; determining the optimal matching mode of the feature points of the matching target image and each image to be matched by maximizing the expectation of the similarity between feature points of different images; calculating, under the optimal matching mode, the overall similarity between the matching target image and each image to be matched; and selecting the image to be matched with the maximum overall similarity to the matching target image as the matching object of the matching target image, completing the matching of the matching target image;
taking the most recently matched image to be matched as the new matching target image and the images not yet matched as the new images to be matched, determining the matching object of the new matching target image from the new images to be matched, and repeating the process of determining matching objects until the matching of all images is completed;
fusing point cloud data corresponding to the mutually matched images to complete point cloud data reconstruction, and calculating the collapse volume according to the reconstruction result of the point cloud data;
the specific method for obtaining the selection feature of each point according to the differences in gray gradient direction, gray gradient amplitude and gray value between each point on the image and each point in its neighborhood is:

$$F_i^j=\sum_{m=1}^{8}\left(\left|\Delta A_m^j\right|+\left|\Delta g_m^j\right|+\Delta\theta_m^j\right)$$

wherein $F_i^j$ denotes the selection feature of the j-th point on the i-th image; $\Delta A_m^j$ denotes the difference between the normalized gray gradient amplitude of the j-th point on the i-th image and that of the m-th point in its eight-neighborhood; $\Delta g_m^j$ denotes the difference between the normalized gray value of the j-th point and that of the m-th point in its eight-neighborhood; and $\Delta\theta_m^j$ denotes the normalized value of the difference between the gray gradient direction of the j-th point and that of the m-th point in its eight-neighborhood; the difference between two gray gradient directions is the acute angle between the two straight lines on which the two gray gradient directions lie, so its value range is [0, π/2];
sorting the selection features of all points on the image from large to small, and selecting the front set number of points as the feature points of the image.
2. The method for measuring landslide volume for municipal engineering based on an unmanned aerial vehicle according to claim 1, wherein the specific process of determining the similarity between the feature points of different images according to the matching feature data sets of all the feature points on each image is as follows:
randomly selecting a feature point in one image as the first feature point and a feature point in another image as the second feature point; calculating the distance value from the first feature point to each point on its set number of closest edges, and the distance value from the second feature point to each point on its set number of closest edges; sorting all the obtained distance values from large to small to obtain the sorting serial number of each distance value, and taking the sorting serial number of each distance value as the matching priority of the normalized gray gradient amplitude at the point corresponding to that distance value, wherein the smaller the serial number, the higher the matching priority;
selecting the matching feature data with the highest priority from the matching feature data sets of the first feature point and the second feature point, judging which of the two matching feature data sets it belongs to, and, after the judgment is finished, searching the matching feature data set that does not comprise the matching feature data with the highest priority for the matching feature data with the smallest normalized gray gradient amplitude difference from it, the smallest normalized gray gradient amplitude difference obtained from the two matching feature data being the first matching feature data difference value of the first feature point and the second feature point;
removing the two matching feature data giving the first matching feature data difference value from the matching feature data sets of the first feature point and the second feature point, selecting the new matching feature data with the highest priority from the matching feature data sets after the removing operation, judging which of the two matching feature data sets it belongs to, and, after the judgment is finished, searching the matching feature data set that does not comprise the new matching feature data with the highest priority for the matching feature data with the smallest normalized gray gradient amplitude difference from it, the smallest normalized gray gradient amplitude difference obtained from the two matching feature data being the second matching feature data difference value of the first feature point and the second feature point;
repeating the process of obtaining matching feature data difference values until at least one of the matching feature data sets of the first feature point and the second feature point becomes an empty set, and defining the matching feature data difference value of each matching feature data remaining in the non-empty matching feature data set as 1, wherein the smaller the matching feature data difference values are, the more similar the first feature point and the second feature point are;
multiplying each matching feature data difference value by the sum of the weights of all matching feature data used for determining that matching feature data difference value, then summing all the obtained products to obtain the difference value between the first feature point and the second feature point, the similarity between the first feature point and the second feature point being 1 minus this difference value;
all the matching feature data used for determining the difference value of the matching feature data comprise the matching feature data for obtaining the minimum normalized gray gradient amplitude difference value and the matching feature data with the difference value of the matching feature data defined as 1;
the weight of the matching feature data is:

$$w_q=\frac{2(Q-q+1)}{Q(Q+1)}$$

wherein $w_q$ is the weight of the matching feature data with sequence number q, and $Q$ is the total number of matching feature data.
3. The method for measuring landslide volume for municipal engineering based on an unmanned aerial vehicle according to claim 1, wherein, after the optimal matching mode of the feature points of the matching target image and each image to be matched is determined, the similarity of every feature point pair between the matching target image and each image to be matched under the optimal matching mode is calculated, the variance of the similarities of all feature point pairs is calculated so that one variance is obtained for the matching target image and each image to be matched, the maximum value of the obtained variances is determined, and the image to be matched corresponding to the maximum variance is matched with the matching target image, completing the matching of the matching target image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211219125.4A CN115294190B (en) | 2022-10-08 | 2022-10-08 | Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211219125.4A CN115294190B (en) | 2022-10-08 | 2022-10-08 | Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115294190A CN115294190A (en) | 2022-11-04 |
CN115294190B (en) | 2022-12-30
Family
ID=83835019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211219125.4A Active CN115294190B (en) | 2022-10-08 | 2022-10-08 | Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115294190B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113850869A (en) * | 2021-09-10 | 2021-12-28 | 国网重庆市电力公司建设分公司 | Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006106522A2 (en) * | 2005-04-07 | 2006-10-12 | Visionsense Ltd. | Method for reconstructing a three- dimensional surface of an object |
CN108765298A (en) * | 2018-06-15 | 2018-11-06 | 中国科学院遥感与数字地球研究所 | Unmanned plane image split-joint method based on three-dimensional reconstruction and system |
CN114542078B (en) * | 2022-03-10 | 2024-07-23 | 嵩县山金矿业有限公司 | Well wall repairing method suitable for collapse of intermediate section of drop shaft |
CN114742876B (en) * | 2022-06-13 | 2022-09-06 | 菏泽市土地储备中心 | Land vision stereo measurement method |
- 2022-10-08: CN application CN202211219125.4A granted as patent CN115294190B (active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113850869A (en) * | 2021-09-10 | 2021-12-28 | 国网重庆市电力公司建设分公司 | Deep foundation pit collapse water seepage detection method based on radar scanning and image analysis |
Also Published As
Publication number | Publication date |
---|---|
CN115294190A (en) | 2022-11-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||