CN111457930A - High-precision mapping and positioning method combining vehicle-mounted Lidar and unmanned aerial vehicle - Google Patents
High-precision mapping and positioning method combining vehicle-mounted Lidar and unmanned aerial vehicle
- Publication number
- CN111457930A (application CN202010252893.4A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- points
- image
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Image Processing (AREA)
Abstract
The invention belongs to the technical field of spatial data processing and positioning, and discloses a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle. The method comprises: obtaining control surface elements; extracting feature points from the unmanned aerial vehicle images after initial orientation processing and performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images; performing gross error elimination on mismatched points with a robust estimation method; establishing a one-to-many or one-to-one mapping relation between the surface elements and the feature points on the unmanned aerial vehicle images; and refining the unmanned aerial vehicle image and camera exterior parameters through iterative bundle adjustment to obtain high-precision orientation elements and parameters.
Description
Technical Field
The invention belongs to the technical field of spatial data processing and positioning, and particularly relates to a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle.
Background
At present, with the arrival of the 5G era, unmanned-driving technology is maturing rapidly, and automatic driving is gradually securing a place in national economic and social development. The high-precision map, a scarce and indispensable resource in the unmanned driving field, plays a core role: it helps the vehicle perceive complex road information such as gradient, curvature and heading in advance and, combined with intelligent path planning, supports correct driving decisions. The production of high-precision maps, especially high-precision road maps, is therefore one of the most difficult and important links of unmanned driving technology.
A prerequisite for producing a high-precision map is collecting and integrating the geographic data of the ground area it covers. The data acquisition mode commonly used in China at present is to scan the area to be measured with a vehicle-mounted Lidar, generate a point cloud, and produce the high-precision map on that basis. In the following description, the point cloud data obtained by a data acquisition vehicle scanning mainly roads, after subsequent processing, is called vehicle-mounted Lidar point cloud data.
In the surveying and mapping field, especially photogrammetry, the unmanned aerial vehicle is an emerging carrier. Its flexibility, wide operating range and low cost make unmanned aerial vehicle mapping increasingly favored by practitioners, and its applications are broadening accordingly.
At the same time, the characteristics of unmanned aerial vehicle mapping data make it highly complementary to vehicle-mounted Lidar point cloud data. Supplementing the point cloud with unmanned aerial vehicle data can therefore effectively improve data acquisition efficiency and accuracy, simplifying the whole high-precision map production flow.
Because the unmanned aerial vehicle and the measuring vehicle are two different carriers with unavoidable measurement errors, their data cannot correspond strictly and cannot directly meet the data requirements of high-precision map production. The two data sets must therefore be processed and transformed so that they correspond well within the required accuracy range; this process is called registration.
The main objects of vehicle-mounted Lidar point cloud data are roads and pavements. Its main characteristics are: (1) massive data distributed in a belt shape, with a large number of holes in non-pavement areas; (2) a large number of coplanar points whose fitted planes have strong directivity; and (3) batches of discrete points that must be rejected as gross errors, caused by other vehicles on the road and the long acquisition time.
Because the data obtained by the unmanned aerial vehicle is mainly image data with position information while the acquisition vehicle obtains laser point cloud data, registration between image and point cloud has long been an open problem in the industry. Two main approaches exist: (1) registering images and point clouds based on semantic information; (2) registering point clouds and images based on registration between point clouds.
The main idea of semantic-information-based registration is to extract semantic information, in most cases a series of feature points, on the point cloud and the image respectively, to match each three-dimensional feature point on the point cloud with a two-dimensional feature point on the image, and to use these correspondences as control to determine the transformation parameters of the image, bringing the two into the same coordinate frame.
Based on this idea there are two main technical methods. The first registers using Lidar intensity information: feature points are extracted from the intensity image generated by the Lidar using image processing and matched with the feature points of the unmanned aerial vehicle image to obtain correspondences. The second extracts semantic information directly from the Lidar three-dimensional point cloud and matches it with the feature points on the image to complete registration. The advantages of these methods are high precision, strong reliability and a small amount of computation during registration. The drawback is that semantic information must be obtained for both the point cloud and the image, which usually requires semantic segmentation of both, raising the difficulty of data processing by a level; at present a good effect is achieved only on data containing typical ground objects. For the high-precision map required by unmanned driving, the main object is the road surface, whose features are often not obvious and difficult to segment effectively, so the effect of this approach is unsatisfactory.
The main idea of registration between point clouds is to first perform bundle adjustment on the unmanned aerial vehicle images to generate a sparse point cloud, find matching points of the sparse point cloud in the Lidar point cloud to form correspondences, determine the transformation parameters that minimize the sum of distances between the two point clouds, and recalculate the exterior orientation elements of each image.
This method has high precision and a good effect, but it can only handle rigid transformations, not non-rigid ones, and it needs a good initial value, otherwise convergence is difficult. Non-rigid registration techniques mainly take the Lidar point cloud as a control condition after rigid registration, perform a second aerial triangulation adjustment on the image data, and adjust the exterior orientation elements of each image to complete the non-rigid registration.
An important step of this idea is determining the correspondence between point clouds, which requires the two point clouds to cover approximately the same objects at densities of roughly the same order of magnitude. However, the density of the sparse point cloud obtained from images by the bundle method is often far smaller than that of the vehicle-mounted Lidar point cloud. Moreover, because of the different viewing angles of the two acquisition modes, unmanned aerial vehicle image data usually captures only top surfaces with little side data, and the resulting point cloud is uniformly distributed overall, whereas the vehicle-mounted Lidar point cloud captures more side data, fewer top surfaces, and contains a large number of holes in non-road areas. Consequently, corresponding points found directly between the two point clouds often contain a large number of mismatches, which seriously degrades registration accuracy.
Through the above analysis, the problems and defects of the prior art are as follows. In the semantic-information-based scheme for registering image and point cloud, the Lidar intensity image often has poor geometric accuracy, so the final registration accuracy is often low. Semantic information must be obtained for both the point cloud and the image, usually requiring semantic segmentation of both, which increases the difficulty of data processing. For the required high-precision map, the features are often not obvious and difficult to segment effectively, so the effect of semantic segmentation of images and point clouds is not ideal.
The existing scheme for registering point cloud and image based on registration between point clouds can only handle rigid transformations, not non-rigid ones, and needs a good initial value, otherwise convergence may be difficult. In determining the correspondence between point clouds, the density of the sparse point cloud obtained by the image bundle method is often far smaller than that of the vehicle-mounted Lidar point cloud; because of the viewing-angle difference of the two acquisition modes, the unmanned aerial vehicle image data captures mainly top surfaces with little side data and a uniformly distributed point cloud, while the vehicle-mounted Lidar point cloud captures more side data, fewer top surfaces, and a large number of holes in non-road areas. Consequently, corresponding points found directly between the two point clouds contain many mismatches, which seriously degrades registration accuracy.
In the prior art, the registered point cloud data and image data come from similar carriers with the same viewing angle, and the processing effect on data with a larger viewing-angle difference is not ideal.
The difficulty in solving the above problems and defects: the method must offer good universality, simplicity, economy and practicability. The biggest defects of the semantic-information-based registration scheme are the complex computation, the long processing time, and the poor intensity-image accuracy in the semantic segmentation process. The defects to be solved in the scheme based on registration between point clouds are the severe mismatch of point cloud densities, the large viewing-angle difference, and repeated texture features. In terms of difficulty, the defects of the scheme based on registration between point clouds are relatively easier to solve.
The significance of solving these problems and defects: a registration scheme with better universality, simplicity, economy and practicability.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle, which solves the registration problem of vehicle-mounted Lidar point cloud data and unmanned aerial vehicle data.
The invention is realized as follows. A high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle comprises the following steps:
step one, extracting representative feature point sets from the vehicle-mounted Lidar point cloud data, and fitting a small surface element on the road surface with each qualifying representative point set;
step two, extracting feature points of each unmanned aerial vehicle image, performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images, and performing gross error elimination on mismatched feature points with the RANSAC method;
step three, after small surface element extraction on the vehicle-mounted Lidar point cloud data and feature point extraction and matching on the unmanned aerial vehicle images are completed, establishing a one-to-many or one-to-one mapping relation between the small surface elements and the feature points on the unmanned aerial vehicle images;
step four, eliminating the rigid deformation between the vehicle-mounted Lidar point cloud and the whole survey area of the unmanned aerial vehicle images;
step five, after rigid registration is completed, adjusting the deformation and dislocation inside the unmanned aerial vehicle images so that they match the vehicle-mounted Lidar point cloud.
Further, in step one, the representative feature point set satisfies:
(1) a point P exists in the point set such that the Euclidean distance from any point Q in the set to P is smaller than a specific threshold;
(2) after the whole point cloud is semantically segmented, all points in the point set belong to the same semantic set;
(3) there is a plane x such that the distances from all points in the point set to the plane have mean 0 and variance σ^2, with σ smaller than a certain threshold.
Further, in step one, the specific point sets are extracted by a seed feature point method plus region growing, specifically comprising:
1) comprehensively utilizing the geometric features of the point cloud and the optical features of the intensity image to extract a number of seed feature points;
2) classifying all points within the neighborhood of distance d of each seed feature point into one point set, completing point set extraction;
3) after extracting the point sets, performing plane fitting with the RANSAC method to obtain a series of initial small surface elements; after the initial small surface elements are obtained, merging the groups of small surface elements whose seed point distances are smaller than a threshold and whose normal vector angles are smaller than a threshold, and iterating several times until a stable state is finally reached.
Further, in the third step, the method for constructing the mapping relationship includes:
i) according to the initial position information of each image and its frame size, the edge key points of the small surface elements extracted from the vehicle-mounted Lidar point cloud data are back-projected onto each image according to the collinearity equation;
ii) the key points back-projected onto the unmanned aerial vehicle image are connected in their connection order on the Lidar point cloud, recovering the projection of the small surface element on the unmanned aerial vehicle image;
iii) it is judged whether each feature point on each image falls inside the projection of some small surface element, and if so a temporary correspondence between that small surface element and those feature points is established;
for all images, if all homonymous points of a feature point fall inside the projections of the same small surface element on the different images, the correspondence between that homonymous feature point pair and the small surface element is established;
among all feature points with temporary correspondences to small surface element projections across all images, the homonymous feature point pairs that can establish a correspondence with a small surface element are retained and the remaining feature points rejected, establishing the one-to-many mapping relation between the small surface elements and the homonymous feature point pairs on the images;
the collinearity equation is:

x - x_0 = -f · [a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
y - y_0 = -f · [a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]

where (X, Y, Z) is the object point, (X_S, Y_S, Z_S) the projection center, (x, y) the image point, (x_0, y_0, f) the interior orientation elements, and a_i, b_i, c_i the elements of the rotation matrix. The equation expresses the mathematical relation that the object point, the image point and the projection center (for an image, generally the lens center) lie on one straight line. It is one of the most basic formulas in photogrammetry and features a simple model, high accuracy, a small amount of computation and suitability for large-scale calculation.
Further, in the fourth step, the method for rigid registration and gross error rejection includes:
performing forward intersection, gross error elimination, Euler angle recovery and exterior parameter updating, and iterating until the change of the Euler angles is smaller than a threshold;
the forward intersection method comprises: using the position and attitude of each image and the collinearity equation to solve, for each feature point pair that has established a mapping relation with a small surface element, the three-dimensional space coordinates of the corresponding object point in the same coordinate system as the vehicle-mounted Lidar point cloud data;
the gross error rejection method comprises: a) threshold filtering, eliminating points whose distance to the small surface element is larger than θ, where θ satisfies θ = 3·m_uav and m_uav is calculated from the GPS accuracy of the unmanned aerial vehicle images;
b) further performing gross error rejection by using RANSAC;
the Euler angle recovery method comprises: the geometric center of the whole unmanned aerial vehicle survey area is used in place of the center of gravity of the aerial triangulation points, and the convergence condition satisfied is that the sum of the Euclidean distances from the aerial triangulation points to the small surface elements is minimum, the mathematical expression being

min_R Σ_i distance_i

where distance_i is the distance from aerial triangulation point i, after the rotation transformation, to its corresponding small surface element; when setting initial values, the Euler angles are initialized to 0 at the start of iteration; using the Euclidean distance as the cost function balances calculation speed and accuracy, and adopting the RANSAC method gives the rigid registration strong robustness, so that high-precision parameters can be estimated even with a large number of gross error points, with good convergence;
the iterative method comprises: updating the exterior parameters once for each solved set of Euler angles, re-establishing the mapping relation after updating, and performing rigid registration again, until the change of the Euler angles solved by two successive iterations is smaller than a threshold.
Further, in step five, the position and attitude of each unmanned aerial vehicle image are adjusted by using the vehicle-mounted Lidar data, while the camera parameters of the unmanned aerial vehicle camera are calibrated.
Further, the fifth step specifically includes:
(I) bundle block adjustment: each light beam is taken as the basic adjustment unit, error equations are listed according to the collinearity condition equations, adjustment is performed uniformly over the whole block, and the exterior orientation elements of each image are solved; with a single beam as the basic adjustment unit, the feature point coordinates (x_i, y_i) are the adjusted observations, and homonymous rays are required to intersect at equal ground point coordinates (X_t, Y_t, Z_t); each feature point of each pair of homonymous feature points satisfies:

cost_{i,1} = Δx_i^2 + Δy_i^2
cost_1 = Σ_i cost_{i,1}

where the whole survey area has n + 1 images;
(II) camera parameter calibration: when performing the bundle block adjustment, the focal length and the imaging center coordinates of the camera are brought into the whole adjustment system; the image distortion produced by the lens is described by the parameters k_0, k_1, k_3, p_0, p_1, with

r_i^2 = x_i^2 + y_i^2

and the optimization objective finally becomes minimizing cost_1 over the exterior orientation elements together with the camera parameters;
(III) control with Lidar: in the absence of error, all homonymous feature point pairs lie on the finite planes of their corresponding small surface elements; the interior and exterior parameters of the unmanned aerial vehicle images are adjusted in the direction that best fits the homonymous feature point pairs to the small surface elements overall; a piecewise direction-weighted distance is used as the cost function, which for each feature point pair is:

cost_{j,2} = I_0·(Δdx_j^2 + Δdy_j^2 + 3Δdz_j^2) + 2000·I_1·(Δdx_j^2 + Δdy_j^2 + 3Δdz_j^2)

where Δdx_j, Δdy_j, Δdz_j are the distances in the X, Y and Z directions from the aerial triangulation point to the finite plane of its corresponding small surface element, and I_0, I_1 are complementary indicator functions that switch according to whether the residual exceeds the threshold derived from μ, the root mean square error of the aerial triangulation points after rigid registration; for all images of the survey area, the cost function is:

cost_2 = Σ_j cost_{j,2}
(IV) overall adjustment by regularization and weighting: in regularization, cost_1 is a distance in image space while cost_2 is a distance in object space, so cost_2 is converted into image space as:

cost_2' = cost_2 / gsd

where gsd is the ground sampling distance, estimated from the flying height and the focal length;

cost_1 and cost_2 are given different weights, and the overall cost function is:

cost = P_1·cost_1 + (P_2·cost_2)/gsd

where P_1 and P_2 are respectively the reciprocal of the internal accuracy of the unmanned aerial vehicle images after the last registration and the reciprocal of the Lidar point cloud accuracy; the whole non-rigid registration optimization problem is to minimize this overall cost over the interior and exterior parameters of all images;
the exterior parameters are updated once per solution, the mapping relation is re-established after updating, non-rigid registration is performed again, and the process iterates until the change of the image exterior parameters solved by two successive iterations is smaller than a threshold.
Another object of the invention is to provide an unmanned aerial vehicle for implementing the high-precision mapping and positioning method combining vehicle-mounted Lidar and the unmanned aerial vehicle.
Another object of the invention is to provide a program storage medium for receiving user input, the stored computer program causing an electronic device to execute the high-precision mapping and positioning method combining vehicle-mounted Lidar and the unmanned aerial vehicle.
Another object of the invention is to provide a computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface to implement the high-precision mapping and positioning method combining vehicle-mounted Lidar and the unmanned aerial vehicle.
By combining all the above technical schemes, the invention has the following advantages and positive effects:
the method is mainly applied to the road pavement, so that a large number of planes can be extracted from the point cloud data very easily, and aiming at the characteristic of the pavement data and the real data quality, the method takes the vehicle-mounted lidar point cloud data and unmanned aerial vehicle image feature point data as main processing objects, takes the position and attitude information of an unmanned aerial vehicle image as assistance, and adopts a weak constraint registration method based on the small surface element semantic features to achieve the expected purpose.
The invention designs a registration mode suited to the characteristics of unmanned aerial vehicle images and vehicle-mounted Lidar point cloud data; in the registration of the unmanned aerial vehicle images to the vehicle-mounted Lidar point cloud, the method is fast, precise, and capable of non-rigid registration.
Compared with the prior art, the method extracts feature sets from the vehicle-mounted point cloud data to obtain control surface elements; extracts and matches feature points of the initially oriented unmanned aerial vehicle images to obtain homonymous feature points; performs gross error elimination on mismatched points with a robust estimation method; establishes a one-to-many or one-to-one mapping relation between the surface elements and the image feature points; and refines the unmanned aerial vehicle image and camera exterior parameters by iterative bundle adjustment to obtain high-precision orientation elements and parameters, further reducing the rigid deformation between the vehicle-mounted laser point cloud and the whole survey area of the images. The method is fast and precise: checked with field control points, the absolute positioning accuracy can be improved to within 5 cm in elevation and 10 cm in plane, providing a low-cost technical solution for high-precision map acquisition combining unmanned aerial vehicles and Lidar.
The effects and advantages of the invention, obtained from experimental data and comparison with the prior art, further include:
compared with a point cloud and image registration scheme (scheme 1 is shown in figure 10) and a pure photogrammetry scheme (scheme 2 is shown in figure 11) based on point cloud registration, the method adopts two groups of data to carry out a comparison test, and experimental areas are shown in figures 10-11.
The external accuracy of the three-dimensional model after registration is shown in the following table:
the final three-dimensional model of the invention is shown in fig. 12-13.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
Fig. 1 is a flowchart of a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to an embodiment of the invention.
Fig. 3 is a diagram of an exemplary small surface element provided by an embodiment of the present invention.
Fig. 4 is a flow chart of small surface element extraction according to an embodiment of the present invention.
Fig. 5 is a flowchart of image feature point extraction and matching according to an embodiment of the present invention.
Fig. 6 is a diagram of feature point pairs on different images corresponding to a small surface element according to an embodiment of the present invention.
Fig. 7 is a flowchart of mapping relationship establishment according to an embodiment of the present invention.
Fig. 8 is a flow chart of rigid registration provided by an embodiment of the present invention.
Fig. 9 is a schematic diagram of the normal equation coefficient matrix structure provided in the embodiment of the present invention.
Fig. 10 is an effect diagram of prior-art scheme 1, which registers point cloud and images based on registration between point clouds, according to an embodiment of the present invention.
Fig. 11 is an effect diagram of prior-art scheme 2, a pure photogrammetry scheme, according to an embodiment of the present invention.
FIG. 12 is a three-dimensional model map of FIG. 10 after registration according to an embodiment of the present invention.
FIG. 13 is a three-dimensional model map of FIG. 11 after registration according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the prior art, the vehicle-mounted lidar data and the unmanned aerial vehicle image data cannot be brought into the same geographic coordinate frame, and the registration between the image and the point cloud is poor.
Aiming at the problems in the prior art, the invention provides a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle; the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle provided by the embodiment of the invention comprises the following steps:
s101, small voxel extraction: and extracting a representative characteristic point set from the vehicle-mounted lidar point cloud data, and fitting a small surface element on the road surface by using each representative characteristic point set meeting the requirements.
S102, unmanned aerial vehicle image feature point extraction and matching: extracting feature points of each unmanned aerial vehicle image, performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images, and performing gross error elimination on the feature points which are mismatched by using a RANSAC method.
S103, constructing a mapping relation: after the vehicle-mounted lidar point cloud data completes small surface element extraction work and unmanned aerial vehicle image feature point extraction and matching, a one-to-many or one-to-one mapping relation is established between the small surface elements and the feature points on the unmanned aerial vehicle image.
S104, rigid registration and gross error elimination: and rigid deformation of the vehicle-mounted laser lidar point cloud and the whole measurement area of the unmanned aerial vehicle image is eliminated.
S105, non-rigid registration and camera parameter calibration: after rigid registration is completed, deformation and dislocation inside the unmanned aerial vehicle image are adjusted, and the deformation and dislocation inside the unmanned aerial vehicle image are matched with the vehicle-mounted laser lidar point cloud.
Fig. 2 is a schematic diagram of a high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to an embodiment of the invention.
In step S101, the main purpose is to extract many representative feature point sets from the vehicle-mounted Lidar point cloud data. Each point set is characterized as follows: (1) there exists a point P in the point set such that the Euclidean distance from any point Q in the set to P is smaller than a specific threshold; (2) after semantic segmentation of the whole point cloud, all points in the set belong to the same semantic set; (3) there is a plane x such that the distances from all points in the set to the plane have mean 0 and variance σ^2, with σ smaller than a certain threshold. In summary, a plane can be fitted with each point set satisfying these requirements. If a point set happens to be a subset of the set of all points falling on the road surface of some road, the bounded plane fitted by it can be regarded as a local part of the actual road. In the following description, such a finite plane is called a small surface element on the road surface (Fig. 3), and any point in the set of all points falling on the road surface of a road is called a road surface point on that road.
The specific point sets are extracted by a seed feature point method plus region growing. The main idea: if a point is a road surface point on some road, consider the probability p(A|B) that all points within its neighborhood of distance d are also road surface points. With D the road surface width, when D ≫ d, p(A|B) ≈ 1, so when d is smaller than a certain threshold it can be considered that, if a point is a road surface point, all points within its neighborhood of distance d are road surface points. Therefore, when extracting the point sets, a number of seed feature points are first extracted by comprehensively using the geometric features of the point cloud and the optical features of the intensity image; then all points within the neighborhood of distance d of each seed feature point are classified into one point set, completing the point set extraction. After the point sets are extracted, plane fitting is performed with the RANSAC method, yielding a series of initial small surface elements. If the seed points of several initial small surface elements are close to each other (distance smaller than the threshold) and their normal vectors are approximately collinear (angle smaller than the threshold), the elements can be merged; this process is iterated several times until a stable state is finally reached. The main flow chart is shown in Fig. 4.
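A minimal sketch of this seed-plus-region-growing extraction is given below, assuming numpy and scipy are available; the thresholds, data layout and function names are illustrative, and the seed selection itself (geometric plus intensity features) is taken as an input:

```python
import numpy as np
from scipy.spatial import cKDTree

def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
    # Fit a plane n.x + d = 0 to a point set with RANSAC.
    rng = np.random.default_rng(seed)
    best_n, best_d, best_in = None, None, np.zeros(len(pts), bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(pts @ n + d) < tol
        if inliers.sum() > best_in.sum():
            best_n, best_d, best_in = n, d, inliers
    return best_n, best_d, best_in

def grow_point_sets(cloud, seeds, d):
    # Classify all cloud points within distance d of each seed into one point set.
    tree = cKDTree(cloud)
    return [cloud[tree.query_ball_point(s, d)] for s in seeds]

def merge_close_bins(bins, dist_thr, angle_thr_deg):
    # Merge bins whose seed points are close and whose normals are nearly collinear;
    # each bin is an illustrative (seed, normal, points) tuple.
    merged, used = [], [False] * len(bins)
    for i, (seed_i, n_i, pts_i) in enumerate(bins):
        if used[i]:
            continue
        group = [pts_i]
        for j in range(i + 1, len(bins)):
            seed_j, n_j, pts_j = bins[j]
            angle = np.degrees(np.arccos(np.clip(abs(n_i @ n_j), -1.0, 1.0)))
            if (not used[j] and np.linalg.norm(seed_i - seed_j) < dist_thr
                    and angle < angle_thr_deg):
                group.append(pts_j)
                used[j] = True
        merged.append(np.vstack(group))
    return merged
```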
In step S102, feature points are extracted from each unmanned aerial vehicle image, feature matching is performed between the images to find homonymous feature points, and gross errors among the mismatched feature points are then eliminated with the RANSAC method; the whole procedure is a very mature step of aerial triangulation in photogrammetry, shown in Fig. 5.
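For illustration, this pipeline can be sketched with OpenCV, assuming SIFT features and a fundamental-matrix RANSAC for the gross error elimination; the patent does not prescribe a specific detector:

```python
import numpy as np
import cv2

def match_image_pair(img_a, img_b, ratio=0.75, ransac_thr=3.0):
    # Detect features, match them, then reject mismatches with RANSAC.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]  # Lowe ratio test
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # Gross error elimination: RANSAC on the epipolar geometry
    _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, ransac_thr, 0.999)
    keep = mask.ravel() == 1
    return pts_a[keep], pts_b[keep]
```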
In step S103, after small surface element extraction on the vehicle-mounted Lidar point cloud data and feature point extraction and matching on the unmanned aerial vehicle images are completed, a one-to-many or one-to-one mapping relation must be established between the small surface elements and the feature points on the unmanned aerial vehicle images. The main method: first, according to the initial position information of each image and its frame size, the edge key points of the small surface elements extracted from the vehicle-mounted Lidar point cloud data are back-projected onto each image according to the collinearity equation (formula 2). Second, the key points back-projected onto the unmanned aerial vehicle image are connected in their connection order on the Lidar point cloud, recovering the projection of the small surface element on the image; in the subsequent description this is simply called the projection of the small surface element. Finally, it is judged whether each feature point on each image falls inside the projection of some small surface element, and if so a temporary correspondence between the small surface element and those feature points is established (Fig. 6). Meanwhile, for all images, if all homonymous points of a feature point fall inside the projections of the same small surface element on the different images, the correspondence between that homonymous feature point pair and the small surface element is established. Among all feature points with temporary correspondences to small surface element projections, the homonymous feature point pairs that can establish a correspondence with a small surface element are retained and the remaining feature points are rejected, establishing the one-to-many mapping relation between the small surface elements and the homonymous feature point pairs on the images. The flow chart is shown in Fig. 7.
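A sketch of the mapping construction follows, reusing the project() function sketched with the collinearity equation above; the data structures (bin outlines as 3D edge key points, tie-point observations per image) are illustrative assumptions:

```python
from matplotlib.path import Path

def build_mapping(bin_outlines, cameras, tie_points):
    # bin_outlines: list of (K, 3) edge key points per small surface element;
    # cameras: list of (center, R, f) per image;
    # tie_points: dict tie_id -> {image_index: (x, y)} homonymous observations.
    polys = [[Path(project(edges, c, R, f)) for (c, R, f) in cameras]
             for edges in bin_outlines]  # projected outlines keep connection order
    mapping = {}
    for tie_id, obs in tie_points.items():
        for b in range(len(bin_outlines)):
            # correspondence only if every homonymous point of the tie point
            # falls inside the projection of the same bin on its own image
            if all(polys[b][img].contains_point(xy) for img, xy in obs.items()):
                mapping.setdefault(b, []).append(tie_id)
                break
    return mapping
```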
In step S104, the main purpose is to eliminate the rigid deformation between the vehicle-mounted Lidar point cloud and the whole survey area of the unmanned aerial vehicle images. The main link is based on the classical rigid registration algorithm, iterative closest point (ICP); the main processing links are forward intersection, gross error rejection, Euler angle recovery and exterior parameter updating, and the whole process is iterated until the change of the Euler angles is smaller than a threshold. The flow chart is shown in Fig. 8.
(4.1) Forward intersection: the main process of this link is to use the position and attitude of each image and the collinearity equation (formula 2) to solve, for each feature point pair that has established a mapping relation with a small surface element, the three-dimensional space coordinates of the corresponding object point in the same coordinate system as the vehicle-mounted Lidar point cloud data; this is a very mature step of aerial triangulation in photogrammetry. The spatial points solved by this process are referred to as aerial triangulation points in the following description. In the absence of errors, the aerial triangulation points must lie strictly on the finite planes of their corresponding small surface elements.
(4.2) Gross error rejection: first, threshold filtering. In actual operation, because of projection differences or mismatching, some aerial triangulation points may not lie on the finite plane of a small surface element, so the aerial triangulation points are threshold-filtered before registration: points farther than θ from the small surface element are regarded as gross error points. Since the vehicle-mounted Lidar point cloud data can be taken as the true value in this link, θ satisfies θ = 3·m_uav, where m_uav is calculated from the GPS accuracy of the unmanned aerial vehicle images. Second, RANSAC is used to further remove gross errors. After these two steps, the remaining point pairs can be considered to lie strictly on the small surface elements in the absence of errors.
(4.3) Euler angle recovery: this link is based on the iterative closest point method. In the original ICP algorithm the point clouds must be centered, but because the aerial triangulation points are incomplete and the point cloud density is uneven, the geometric center of the whole unmanned aerial vehicle survey area is used in place of the center of gravity of the aerial triangulation points. The convergence condition finally satisfied is that the sum of the Euclidean distances from the aerial triangulation points to the small surface elements is minimum, the mathematical expression being

min_R Σ_i distance_i

where distance_i is the distance from aerial triangulation point i, after the rotation transformation, to its corresponding small surface element. In actual operation, because current hardware equipment gives the position and attitude data of the unmanned aerial vehicle comparable accuracy, the Euler angles can be initialized to 0 at the start of iteration and convergence is still guaranteed.
(4.4) Iteration: the process is iterated; the exterior parameters are updated once for each solved set of Euler angles, the mapping relation is re-established after the update, and rigid registration is performed again, until the change of the Euler angles solved by two successive iterations is smaller than the threshold. In practice, three to five iterations typically ensure convergence.
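The rigid registration loop can be sketched as follows, assuming scipy and reusing the rotation_matrix() sketch above; for brevity a robust loss stands in for the explicit RANSAC stage described in (4.2):

```python
import numpy as np
from scipy.optimize import least_squares

def plane_dist(pts, planes):
    # Signed distance of each point to its corresponding plane [nx, ny, nz, d].
    return np.einsum('ij,ij->i', pts, planes[:, :3]) + planes[:, 3]

def rigid_registration(at_pts, planes, area_center, m_uav):
    # Threshold filtering: drop AT points farther than theta = 3 * m_uav from their bin.
    keep = np.abs(plane_dist(at_pts, planes)) < 3.0 * m_uav
    pts, pls = at_pts[keep], planes[keep]

    def residuals(x):
        # Rotate about the geometric center of the survey area, then translate.
        R = rotation_matrix(*x[:3])  # sketch given with the collinearity equation
        moved = (pts - area_center) @ R.T + area_center + x[3:]
        return plane_dist(moved, pls)

    x0 = np.zeros(6)  # Euler angles (and translation) initialized to 0
    sol = least_squares(residuals, x0, loss='soft_l1')  # robust to residual gross errors
    return sol.x      # [omega, phi, kappa, tx, ty, tz]
```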
In step S105, after rigid registration is completed, the deformation and dislocation inside the unmanned aerial vehicle images are adjusted so that the images match the vehicle-mounted Lidar point cloud. The main purpose is therefore to adjust the position and attitude of each unmanned aerial vehicle image using the vehicle-mounted Lidar data while calibrating the camera parameters of the unmanned aerial vehicle camera. The process is performed iteratively; the main idea is self-calibrating bundle adjustment with control conditions, whose normal equation coefficient matrix is a bordered banded matrix, shown schematically in Fig. 9.
(5.1) Bundle block adjustment
In the bundle block adjustment link, each light beam is taken as the basic adjustment unit, error equations are listed according to the collinearity condition equations, adjustment is performed uniformly over the whole block, and the exterior orientation elements of each image are solved. With a single beam as the basic adjustment unit, the feature point coordinates (x_i, y_i) are the adjusted observations, and homonymous rays are required to intersect at equal ground point coordinates (X_t, Y_t, Z_t). Each feature point of each pair of homonymous feature points satisfies

cost_{i,1} = Δx_i^2 + Δy_i^2    (8)

cost_1 = Σ_i cost_{i,1}    (9)

where the whole survey area has n + 1 images.
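Formulas (8)-(9) transcribe directly into code; the data structures are illustrative and project() is the collinearity sketch given earlier:

```python
def bundle_cost1(observations, cameras, ground_pts):
    # observations: iterable of (image_index, point_index, x_obs, y_obs);
    # cameras: list of (center, R, f); ground_pts: (M, 3) intersected ground points.
    cost1 = 0.0
    for img, pt, x_obs, y_obs in observations:
        c, R, f = cameras[img]
        x_pred, y_pred = project(ground_pts[pt][None, :], c, R, f)[0]
        cost1 += (x_obs - x_pred) ** 2 + (y_obs - y_pred) ** 2  # cost_{i,1}
    return cost1  # cost_1, summed over all feature points
```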
(5.2) Camera parameter calibration
In actual operation, as the number of uses increases, the focal length of the camera and the position of the principal point change, producing a systematic error in the aerial triangulation result and affecting the final registration accuracy. Therefore, when performing the bundle block adjustment, the focal length and the imaging center coordinates of the camera must be incorporated into the whole adjustment system. When the unmanned aerial vehicle images are taken, the lens distorts them; mainly radial and tangential distortion are considered, described by the five parameters k_0, k_1, k_3, p_0, p_1 (formulas 10-13), with

r_i^2 = x_i^2 + y_i^2

In summary, combined with the camera calibration parameters, formula (6) is extended accordingly (formula 14), and the optimization objective finally becomes minimizing cost_1 over the exterior orientation elements together with the camera parameters (formula 15).
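The distortion formulas (10)-(13) appear only as images in the source; the sketch below assumes a common Brown-type radial-plus-tangential model using the five named parameters, and is not the patent's exact formulation:

```python
def distort(x, y, k0, k1, k3, p0, p1):
    # Assumed Brown-type model: radial terms scaled by k0, k1, k3,
    # tangential terms by p0, p1; r_i^2 = x_i^2 + y_i^2 as in the text.
    r2 = x * x + y * y
    radial = k0 * r2 + k1 * r2 ** 2 + k3 * r2 ** 3
    dx = x * radial + p0 * (r2 + 2 * x * x) + 2 * p1 * x * y
    dy = y * radial + p1 * (r2 + 2 * y * y) + 2 * p0 * x * y
    return x + dx, y + dy
```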
(5.3) Control with Lidar

In this link, the mapping relation between the small surface elements and the homonymous feature point pairs, established and refined during the rigid registration process, is used to control the unmanned aerial vehicle image data. The main idea is that, in the absence of error, all homonymous feature point pairs lie strictly on the finite planes of their corresponding small surface elements. Therefore, the interior and exterior parameters of the unmanned aerial vehicle images should be adjusted in the direction that best fits the homonymous feature points to the small surface elements overall. When setting the cost function, note that the accuracy of the unmanned aerial vehicle image data has already been greatly improved by the rigid registration link, so the vehicle-mounted Lidar data can no longer simply be regarded as the true value. Owing to the characteristics of Lidar data, its accuracy in the depth direction is far higher than in the plane direction, so the vehicle-mounted Lidar data can be approximately trusted in the depth (Z) direction; for the unmanned aerial vehicle images, the accuracy of the Z coordinate of the aerial triangulation points is lower than that of the plane coordinates, generally by a factor of about 3-5. A piecewise direction-weighted distance is therefore used as the cost function; for each feature point pair it is shown in formula (16):

cost_{j,2} = I_0·(Δdx_j^2 + Δdy_j^2 + 3Δdz_j^2) + 2000·I_1·(Δdx_j^2 + Δdy_j^2 + 3Δdz_j^2)    (16)

where Δdx_j, Δdy_j, Δdz_j are the distances in the X, Y and Z directions from the aerial triangulation point to the finite plane of its corresponding small surface element, and I_0, I_1 are complementary indicator functions (formulas 17-18) that switch according to whether the residual exceeds the threshold derived from μ, the root mean square error of the aerial triangulation points after rigid registration. For all images of the survey area, the cost function is shown in formula (19):

cost_2 = Σ_j cost_{j,2}    (19)
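Formulas (16) and (19) in code; since formulas (17)-(18) for the indicator functions are not reproduced in the text, the inlier test against a multiple of μ is an assumption of the sketch:

```python
import numpy as np

def lidar_control_cost2(d_xyz, mu, k=3.0):
    # d_xyz: (N, 3) X/Y/Z distances of each AT point to its bin's finite plane.
    w = d_xyz[:, 0] ** 2 + d_xyz[:, 1] ** 2 + 3.0 * d_xyz[:, 2] ** 2
    resid = np.linalg.norm(d_xyz, axis=1)
    I0 = resid <= k * mu  # assumed inlier indicator standing in for formulas (17-18)
    return np.sum(w[I0]) + 2000.0 * np.sum(w[~I0])  # cost_2 = sum of cost_{j,2}
```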
(5.4) Overall adjustment
For the whole non-rigid adjustment link, two cost functions must be considered, and the main work is twofold: regularization and weighting. Regularization is needed because cost_1 and cost_2 have different units: cost_1 mainly describes a distance in image space, while cost_2 describes a distance in object space. cost_2 therefore has to be converted into image space, as shown in formula (20):

cost_2' = cost_2 / gsd    (20)

where gsd is the ground sampling distance, estimated from the flying height and the focal length (formula 21). And because the vehicle-mounted Lidar point cloud accuracy is similar but not identical to the unmanned aerial vehicle image accuracy, the two must be given different weights, so the whole cost function is as shown in formula (22):

cost = P_1·cost_1 + (P_2·cost_2)/gsd    (22)

where P_1 and P_2 are respectively the reciprocal of the internal accuracy of the unmanned aerial vehicle images after the last registration and the reciprocal of the Lidar point cloud accuracy, so the whole non-rigid registration optimization problem (formula 23) is to minimize this overall cost over the interior and exterior parameters of all images.
the above process is performed iteratively, the external parameter is updated once for each solution, the mapping relation needs to be re-established after the updating, the non-rigid registration is performed for the second time, and the iteration is performed until the change of the external parameter of the image obtained by two times of iteration solution is smaller than the threshold value. In practice, iteration of one to two times generally ensures convergence. Thereby completing the entire registration process.
The invention is further described below in connection with specific experimental or simulation results.
Compared with a scheme that registers point cloud and images based on registration between point clouds (scheme 1, Fig. 10) and a pure photogrammetry scheme (scheme 2, Fig. 11), the method was evaluated in a comparison test on two groups of data; the experimental areas are shown in Figs. 10-11.
The external accuracy of the three-dimensional model after registration is shown in the following table:
the final three-dimensional model of the invention is shown in fig. 12-13.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, and may also be implemented by hardware entirely. With this understanding in mind, all or part of the technical solutions of the present invention that contribute to the background can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments or some parts of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention; any modifications, equivalent substitutions and improvements made by those skilled in the art within the spirit and principle of the present invention are intended to be covered by the present invention.
Claims (10)
1. A high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle, characterized by comprising the following steps:
step one, extracting representative feature point sets from the vehicle-mounted point cloud data, and fitting a small surface element on the road surface with each qualifying representative point set;
step two, extracting feature points of each unmanned aerial vehicle image, performing feature matching between the images to obtain homonymous feature points between the unmanned aerial vehicle images, and performing gross error elimination on mismatched feature points with the RANSAC method;
step three, after small surface element extraction on the vehicle-mounted point cloud data and feature point extraction and matching on the unmanned aerial vehicle images are completed, establishing a one-to-many or one-to-one mapping relation between the small surface elements and the feature points on the unmanned aerial vehicle images;
step four, eliminating the rigid deformation between the vehicle-mounted point cloud and the whole survey area of the unmanned aerial vehicle images;
step five, after rigid registration is completed, adjusting the deformation and dislocation inside the unmanned aerial vehicle images so that they are non-rigidly registered to the vehicle-mounted point cloud.
2. The high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle of claim 1, wherein in step one the representative feature point set satisfies:
(1) a point P exists in the point set such that the Euclidean distance from any point Q in the set to P is smaller than a specific threshold;
(2) after the whole point cloud is semantically segmented, all points in the point set belong to the same semantic set;
(3) there is a plane x such that the distances from all points in the point set to the plane have mean 0 and variance σ^2, with σ smaller than a certain threshold.
3. The high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle of claim 1, wherein in step one the specific point sets are extracted by a seed feature point method plus region growing, specifically comprising:
1) comprehensively utilizing the geometric features of the point cloud and the optical features of the intensity image to extract a number of seed feature points;
2) classifying all points within the neighborhood of distance d of each seed feature point into one point set, completing point set extraction;
3) after extracting the point sets, performing plane fitting with the RANSAC method to obtain a series of initial small surface elements; after the initial small surface elements are obtained, merging the groups of small surface elements whose seed point distances are smaller than a threshold and whose normal vector angles are smaller than a threshold, and iterating several times until a stable state is finally reached.
4. The high-precision mapping and positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle of claim 1, wherein in step three the method for constructing the mapping relation comprises:
i) according to the initial position information of each image and its frame size, the edge key points of the small surface elements extracted from the vehicle-mounted Lidar point cloud data are back-projected onto each image according to the collinearity equation;
ii) the key points back-projected onto the unmanned aerial vehicle image are connected in their connection order on the Lidar point cloud, recovering the projection of the small surface element on the unmanned aerial vehicle image;
iii) it is judged whether each feature point on each image falls inside the projection of some small surface element, and if so a temporary correspondence between that small surface element and those feature points is established;
for all images, if all homonymous points of a feature point fall inside the projections of the same small surface element on the different images, the correspondence between that homonymous feature point pair and the small surface element is established;
among all feature points with temporary correspondences to small surface element projections across all images, the homonymous feature point pairs that can establish a correspondence with a small surface element are retained and the remaining feature points rejected, establishing the one-to-many mapping relation between the small surface elements and the homonymous feature point pairs on the images;
the collinearity equation is (in its standard photogrammetric form, with principal point (x_0, y_0), focal length f, rotation matrix elements a_i, b_i, c_i and projection center (X_S, Y_S, Z_S)):

x - x_0 = -f · [a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
y - y_0 = -f · [a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]
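A small sketch of the back-projection of step i) using this collinearity equation, assuming the rotation matrix R (rows a, b, c), projection center S, focal length f and principal point (x0, y0) are already known for each image:

```python
import numpy as np

def backproject(points_xyz, R, S, f, x0=0.0, y0=0.0):
    """Project object-space points into image space via the collinearity equation.

    points_xyz: (N, 3) lidar key points; R: 3x3 rotation (world -> camera axes,
    rows [a1 b1 c1; a2 b2 c2; a3 b3 c3]); S: projection center (X_S, Y_S, Z_S);
    f: focal length in pixel units.
    """
    d = (points_xyz - S) @ R.T     # columns: numerator of x, numerator of y, denominator
    x = x0 - f * d[:, 0] / d[:, 2]
    y = y0 - f * d[:, 1] / d[:, 2]
    return np.stack([x, y], axis=1)
```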
5. The high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle as claimed in claim 1, wherein in step four, the rigid registration and gross error rejection method comprises:
performing forward intersection, gross error elimination, Euler angle recovery and external parameter updating, and iterating until the change of the Euler angles is smaller than a threshold;
the forward intersection method comprises: using the position and attitude of each image and the collinearity equation to solve, in the same coordinate system as the vehicle-mounted lidar point cloud data, the three-dimensional space coordinates of the object point corresponding to each feature point pair that has established a mapping relation with a small surface element;
the gross error rejection method comprises the following steps: a) threshold filtering, namely removing points whose distance to the corresponding small surface element is greater than θ, where θ = 3·m_uav and m_uav is calculated from the GPS precision of the unmanned aerial vehicle images;
b) further performing gross error rejection by using RANSAC;
the Euler angle recovery method comprises: using the geometric center of the whole unmanned aerial vehicle survey area in place of the centroid of the aerial triangulation points, with the convergence condition that the sum of the Euclidean distances from the aerial triangulation points to the small surface elements is minimized, expressed mathematically as:

min over the Euler angles (φ, ω, κ) of Σ_i distance_i

wherein distance_i is the distance from the i-th aerial triangulation point, after the rotation transformation, to its corresponding small surface element; when setting initial values, the Euler angles are initialized to 0 at the start of the iteration;
the iterative method comprises: updating the external parameters once for each solved set of Euler angles, re-establishing the mapping relation after the update, and performing rigid registration a second time, until the change of the Euler angles solved by two successive iterations is smaller than a threshold.
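A sketch of the Euler angle recovery, assuming each aerial triangulation point has already been paired with a fitted small-surface-element plane (unit normal and offset); the optimizer choice is illustrative, not the patent's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def recover_euler_angles(pts, plane_normals, plane_ds, center):
    """Solve the Euler angles minimizing the summed distances from the aerial
    triangulation points to their corresponding small surface elements.
    Rotation is taken about the survey-area geometric center, per the claim."""
    def cost(euler):
        R = Rotation.from_euler("xyz", euler).as_matrix()
        rotated = (pts - center) @ R.T + center
        # unsigned point-to-plane distances (normals assumed unit length)
        d = np.abs(np.einsum("ij,ij->i", rotated, plane_normals) + plane_ds)
        return d.sum()
    # initial Euler angles set to 0, as stated in the claim
    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    return res.x
```

Each solved set of angles would then update the external parameters, after which the mapping relation is rebuilt and the loop repeats until the angle change falls below the threshold, as the claim describes.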
6. The high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle as claimed in claim 1, wherein in step five, the position and attitude of each unmanned aerial vehicle image are adjusted by using the vehicle-mounted lidar data, while the camera parameters of the unmanned aerial vehicle camera are simultaneously calibrated and iterated.
7. The high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle as claimed in claim 6, wherein step five specifically comprises:
(I) bundle adjustment of the area network: taking each light beam as the basic adjustment unit, listing the error equations according to the collinearity condition equation, performing the adjustment uniformly over the whole area, and solving the exterior orientation elements of each image. The adjustment unit is the single light beam, with the feature point coordinates (x_i, y_i) as the observations; the ground point coordinates (X_t, Y_t, Z_t) at which homonymous rays intersect are required to be equal. Each feature point of each pair of homonymous feature points satisfies:
cost_{i,1} = (Δx_i² + Δy_i²)
cost_1 = Σ_i cost_{i,1}
wherein the whole survey area has n+1 images;
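The image-space cost cost_1 then reduces to summed squared reprojection residuals; a minimal sketch, assuming the observed and projected image coordinates are stacked in (N, 2) numpy arrays:

```python
def image_space_cost(observed_xy, projected_xy):
    """cost_1: summed squared reprojection residuals over all feature points."""
    residuals = observed_xy - projected_xy      # rows are (dx_i, dy_i)
    return float((residuals ** 2).sum())
```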
(II) camera parameter calibration: during the bundle adjustment of the area network, the focal length and the principal point coordinates of the camera are brought into the overall adjustment system; the image distortion is described by the parameters k_0, k_1, k_3, p_0, p_1 as follows:
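The distortion formula itself appears only as an image in the source; as a hedge, the sketch below shows the common Brown radial–tangential model that such parameter sets usually describe (it uses the conventional k1, k2, p1, p2 names, not necessarily the patent's exact k_0, k_1, k_3, p_0, p_1 form):

```python
def distort(x, y, k1, k2, p1, p2):
    """Common radial-tangential lens distortion on normalized image coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2      # radial term
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```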
finally, the optimization solution objective becomes the equation:
(III) control with lidar: using a piecewise, direction-weighted distance as the cost function; for each feature point pair, the cost function is given by the following formula:
cost_{j,2} = I_0·(Δdx_j² + Δdy_j² + 3·Δdz_j²) + 2000·I_1·(Δdx_j² + Δdy_j² + 3·Δdz_j²)
wherein Δdx_j, Δdy_j, Δdz_j are the distances in the X, Y and Z directions from the aerial triangulation point to the bounded plane of the corresponding small surface element, and I_0, I_1 are indicator functions, defined by the following equation:
where μ is the root-mean-square error of the aerial triangulation points after rigid registration. For all images of the survey area, the cost function is given by:
cost_2 = Σ_j cost_{j,2}
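A sketch of cost_2; since the indicator-function definition is elided in the source (it was an image), the 3μ inlier/outlier split used here is an assumption:

```python
import numpy as np

def lidar_control_cost(d_xyz, mu):
    """cost_2: direction-weighted distances, heavily penalizing suspected outliers.

    d_xyz: (N, 3) per-point offsets (ddx_j, ddy_j, ddz_j) to the bounded plane of
    the corresponding small surface element; mu: RMS error after rigid registration.
    """
    weighted = d_xyz[:, 0] ** 2 + d_xyz[:, 1] ** 2 + 3.0 * d_xyz[:, 2] ** 2
    inlier = np.linalg.norm(d_xyz, axis=1) <= 3.0 * mu   # assumed split for I_0
    cost_j = np.where(inlier, weighted, 2000.0 * weighted)  # I_1 branch
    return float(cost_j.sum())
```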
(IV) overall adjustment through regularization and weighting: in the regularization, cost_1 is a distance in image space while cost_2 is a distance in object space, so cost_2 is converted into image space as follows:
cost_2′ = cost_2 / gsd
wherein gsd is the ground sampling distance, estimated from the flying height and the focal length by the following formula;
cost_1 and cost_2 are given different weights, and the overall cost function is given by the following formula:
cost = P_1·cost_1 + (P_2·cost_2) / gsd
wherein P_1 and P_2 are, respectively, the reciprocal of the internal precision of the unmanned aerial vehicle images after the previous registration and the reciprocal of the lidar point cloud precision; the whole non-rigid registration optimization problem is given by the following formula:
the external parameters are updated once for each solution; after the update, the mapping relation is re-established and the non-rigid registration is performed a second time, iterating until the change of the image external parameters solved by two successive iterations is smaller than a threshold.
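A sketch of the overall weighted cost of (IV); the gsd estimate (flying height × pixel size / focal length) is the standard formula and stands in for the patent's elided expression:

```python
def overall_cost(cost1, cost2, sigma_img, sigma_lidar,
                 flying_height, pixel_size, focal_length):
    """cost = P_1*cost_1 + (P_2*cost_2)/gsd, with reciprocal-precision weights."""
    gsd = flying_height * pixel_size / focal_length   # standard GSD estimate
    p1 = 1.0 / sigma_img      # reciprocal internal precision of the UAV images
    p2 = 1.0 / sigma_lidar    # reciprocal lidar point cloud precision
    return p1 * cost1 + (p2 * cost2) / gsd
```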
8. An unmanned aerial vehicle implementing the high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to any one of claims 1 to 7.
9. A program storage medium for receiving user input, wherein the stored computer program causes an electronic device to perform the high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to any one of claims 1 to 7.
10. A computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface for implementing the high-precision mapping positioning method combining vehicle-mounted Lidar and an unmanned aerial vehicle according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010252893.4A CN111457930B (en) | 2020-04-02 | 2020-04-02 | High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111457930A true CN111457930A (en) | 2020-07-28 |
CN111457930B CN111457930B (en) | 2021-11-23 |
Family
ID=71684425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010252893.4A Active CN111457930B (en) | 2020-04-02 | 2020-04-02 | High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111457930B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112268541A (en) * | 2020-10-16 | 2021-01-26 | 中国有色金属长沙勘察设计研究院有限公司 | Three-dimensional space detection method |
CN112465732A (en) * | 2020-11-27 | 2021-03-09 | 武汉大学 | Registration method of vehicle-mounted laser point cloud and sequence panoramic image |
CN115511932A (en) * | 2022-09-29 | 2022-12-23 | 北京银河方圆科技有限公司 | Registration method based on medical image, readable storage medium and electronic device |
CN116222592A (en) * | 2023-03-03 | 2023-06-06 | 北京数字政通科技股份有限公司 | High-precision map generation method and system based on multi-source data |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130242285A1 (en) * | 2012-03-15 | 2013-09-19 | GM Global Technology Operations LLC | METHOD FOR REGISTRATION OF RANGE IMAGES FROM MULTIPLE LiDARS |
CN103744086A (en) * | 2013-12-23 | 2014-04-23 | 北京建筑大学 | High-precision registration method for ground laser radar and close-range photography measurement data |
CN103927731A (en) * | 2014-05-05 | 2014-07-16 | 武汉大学 | Low-altitude remote sensing image rapid and automatic splicing method without POS assisting |
CN104123730A (en) * | 2014-07-31 | 2014-10-29 | 武汉大学 | Method and system for remote-sensing image and laser point cloud registration based on road features |
CN104794743A (en) * | 2015-04-27 | 2015-07-22 | 武汉海达数云技术有限公司 | Color point cloud producing method of vehicle-mounted laser mobile measurement system |
CN105069843A (en) * | 2015-08-22 | 2015-11-18 | 浙江中测新图地理信息技术有限公司 | Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling |
US20170206648A1 (en) * | 2016-01-20 | 2017-07-20 | Ez3D, Llc | System and method for structural inspection and construction estimation using an unmanned aerial vehicle |
CN109115186A (en) * | 2018-09-03 | 2019-01-01 | 山东科技大学 | A kind of 360 ° for vehicle-mounted mobile measuring system can measure full-view image generation method |
CN110021039A (en) * | 2018-11-15 | 2019-07-16 | 山东理工大学 | The multi-angle of view material object surface point cloud data initial registration method of sequence image constraint |
CN110006408A (en) * | 2019-04-17 | 2019-07-12 | 武汉大学 | LiDAR data " cloud control " aviation image photogrammetric survey method |
CN110443837A (en) * | 2019-07-03 | 2019-11-12 | 湖北省电力勘测设计院有限公司 | City airborne laser point cloud and aviation image method for registering and system under a kind of line feature constraint |
Non-Patent Citations (5)
Title |
---|
JIANPING LI et al.: "NRLI-UAV: Non-rigid registration of sequential raw laser scans and images for low-cost UAV LiDAR point cloud quality improvement", ISPRS Journal of Photogrammetry and Remote Sensing *
LI ZHENG et al.: "Non-Rigid Vehicle-Borne LiDAR-Assisted Aerotriangulation", Remote Sensing *
RONGYONG HUANG et al.: "Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations", Sensors *
ZHANG FAN et al.: "Research progress in registration of laser scanning and optical image data", Bulletin of Surveying and Mapping *
LIN CHENGDA et al.: "Detailed object reconstruction by fusion of laser point cloud and digital imagery", Computer Engineering and Applications *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112268541A (en) * | 2020-10-16 | 2021-01-26 | 中国有色金属长沙勘察设计研究院有限公司 | Three-dimensional space detection method |
CN112268541B (en) * | 2020-10-16 | 2022-04-15 | 中国有色金属长沙勘察设计研究院有限公司 | Three-dimensional space detection method |
CN112465732A (en) * | 2020-11-27 | 2021-03-09 | 武汉大学 | Registration method of vehicle-mounted laser point cloud and sequence panoramic image |
CN115511932A (en) * | 2022-09-29 | 2022-12-23 | 北京银河方圆科技有限公司 | Registration method based on medical image, readable storage medium and electronic device |
CN115511932B (en) * | 2022-09-29 | 2024-02-13 | 北京银河方圆科技有限公司 | Registration method based on medical image, readable storage medium and electronic equipment |
CN116222592A (en) * | 2023-03-03 | 2023-06-06 | 北京数字政通科技股份有限公司 | High-precision map generation method and system based on multi-source data |
CN116222592B (en) * | 2023-03-03 | 2023-09-29 | 北京数字政通科技股份有限公司 | High-precision map generation method and system based on multi-source data |
Also Published As
Publication number | Publication date |
---|---|
CN111457930B (en) | 2021-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111457930B (en) | High-precision mapping positioning method by combining vehicle-mounted Lidar and unmanned aerial vehicle | |
CN110570428B (en) | Method and system for dividing building roof sheet from large-scale image dense matching point cloud | |
CN112102458B (en) | Single-lens three-dimensional image reconstruction method based on laser radar point cloud data assistance | |
CN108171131B (en) | Improved MeanShift-based method for extracting Lidar point cloud data road marking line | |
CN108802785A (en) | Vehicle method for self-locating based on High-precision Vector map and monocular vision sensor | |
CN111598823A (en) | Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium | |
CN101604018B (en) | Method and system for processing high-definition remote sensing image data | |
CN110006408B (en) | LiDAR data cloud control aerial image photogrammetry method | |
WO2018061010A1 (en) | Point cloud transforming in large-scale urban modelling | |
CN108107462B (en) | RTK and high-speed camera combined traffic sign post attitude monitoring device and method | |
CN110288659B (en) | Depth imaging and information acquisition method based on binocular vision | |
CN114332348B (en) | Track three-dimensional reconstruction method integrating laser radar and image data | |
CN111369436A (en) | Airborne LiDAR point cloud rarefying method considering multi-terrain features | |
CN104182968B (en) | The fuzzy moving-target dividing method of many array optical detection systems of wide baseline | |
CN107679458B (en) | Method for extracting road marking lines in road color laser point cloud based on K-Means | |
CN114565616B (en) | Unstructured road state parameter estimation method and system | |
CN109100719A (en) | Combine plotting method with the topographic map of optical image based on satellite-borne SAR image | |
CN112270698A (en) | Non-rigid geometric registration method based on nearest curved surface | |
CN107492120A (en) | Point cloud registration method | |
CN113947724A (en) | Automatic line icing thickness measuring method based on binocular vision | |
CN114563000B (en) | Indoor and outdoor SLAM method based on improved laser radar odometer | |
CN107610216B (en) | Particle swarm optimization-based multi-view three-dimensional point cloud generation method and applied camera | |
CN112561981A (en) | Photogrammetry point cloud filtering method fusing image information | |
Kong et al. | UAV LiDAR data-based lane-level road network generation for urban scene HD Maps | |
CN116977580A (en) | Method for manufacturing mountain area large scale DEM based on airborne LiDAR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||