CN113706593A - Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection - Google Patents

Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection Download PDF

Info

Publication number
CN113706593A
Authority
CN
China
Prior art keywords
point cloud
point
frame
registration
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110997180.5A
Other languages
Chinese (zh)
Other versions
CN113706593B (en)
Inventor
贾克斌
陈嘉平
王志举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202110997180.5A priority Critical patent/CN113706593B/en
Publication of CN113706593A publication Critical patent/CN113706593A/en
Application granted granted Critical
Publication of CN113706593B publication Critical patent/CN113706593B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection, which solves the misregistration caused by the acquisition mode and the environmental characteristics of the vehicle underbody when point cloud frames collected from beneath the vehicle are registered and fused. The method registers the point cloud sequence frames acquired from the vehicle bottom frame by frame. A coarse registration result is obtained by solving matching point pairs formed from the FPFH (Fast Point Feature Histogram) features of sampling points that satisfy a distribution criterion and points with similar FPFH features in the point cloud frame to be registered; this result serves as the initial matrix for three-dimensional ICP (Iterative Closest Point) fine registration. The ICP fine registration result is obtained by solving nearest registration point pairs constrained by the prior motion of the acquisition platform, and the frames are fused according to this result. The method offers good accuracy and real-time performance for acquiring the overall geometric data of the vehicle underbody.

Description

Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection
Technical Field
The invention relates to the field of automatic detection, in particular to a vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection.
Background
In recent years, with the rapid development of computer vision and laser technology, automated detection technology has advanced quickly as well. The geometric trafficability of a vehicle is determined mainly by the chassis and tires and is described by parameters such as front overhang, rear overhang, minimum ground clearance, passing (ramp breakover) angle, approach angle, and departure angle. For the detection of these geometric passing parameters, the confined and occluded space beneath the chassis makes convenient data acquisition with instruments difficult, which has restricted the development of automated detection of vehicle geometric passing parameters.
Existing detection of vehicle geometric passing parameters requires several operators to take manual measurements with hand-held tools. Measuring the minimum ground clearance requires crawling under the vehicle to locate the lowest point, which carries large error and operational risk, while the angle parameters depend heavily on operator experience in choosing indirect measurement points, making data consistency difficult to guarantee.
Lidar is one of the principal devices in laser three-dimensional scanning, and point cloud data reflecting the geometric information of an object can be acquired with a lidar device. In an automated passability detection scheme, high-quality point cloud data must first be acquired, and the passing parameters are then calculated from it. Because these parameters are computed mainly from the geometric information of the vehicle chassis and tires, effective acquisition and accurate fusion of point cloud data are a necessary precondition and an important basis for vehicle passing parameter detection. The lidar collects geometric information of the surrounding environment and outputs it as point cloud frames. A point cloud frame is one frame of point cloud; each point is a measurement from a laser detector inside the lidar, and its coordinates are obtained by converting the angle and distance measured by the rotating detector at the current moment into a rectangular coordinate system. The set of such points is called a point cloud. The overall point cloud of the vehicle chassis is fused from a sequence of point cloud frames covering the chassis, and the key step of this fusion is efficient and accurate frame-by-frame registration of the point cloud frame sequence.
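The angle-and-distance to rectangular-coordinate conversion described above can be sketched as follows. This is an illustrative model, not the patent's own formula: it assumes a rotating-beam lidar that reports a range, an azimuth (rotation) angle, and a fixed per-detector elevation angle.

```python
import numpy as np

def polar_to_cartesian(distance, azimuth, elevation):
    """Convert lidar range/angle measurements to rectangular coordinates.

    Generic rotating-lidar model (an assumption; the patent gives no
    explicit formula): azimuth is the detector's rotation angle and
    elevation its fixed vertical angle, both in radians. Inputs broadcast
    against each other.
    """
    d, az, el = np.broadcast_arrays(distance, azimuth, elevation)
    x = d * np.cos(el) * np.cos(az)
    y = d * np.cos(el) * np.sin(az)
    z = d * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# One full revolution of a zero-elevation detector over flat ground traces
# the ring-shaped ground pattern discussed below.
azimuths = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ring = polar_to_cartesian(5.0, azimuths, 0.0)
```

The ring produced by the last two lines illustrates why flat ground appears as a circular ring in each single frame.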
Lidar point cloud collection in the vehicle-underbody environment has three characteristics. First, because the chassis area is small, a single acquired frame contains few useful environmental feature points given the resolution limits of the lidar. Second, to collect data beneath the vehicle at all, the lidar and its carrier platform must sit low, so ground points account for a large proportion of every frame. Third, the lidar acquires 360-degree point cloud data by rotating multiple laser detectors, so the relatively flat ground appears as a circular ring in each single frame output by the device. Sparse environmental feature points combined with a large proportion of ring-shaped ground points cause two problems. First, most or all of the Euclidean-distance-based coarse registration samples fall on the ground, so ground points that have little relation to the vehicle's geometry dominate the registration and become the main error source of coarse registration. Second, the ring-shaped point cloud offers no distinct corner features and has nearly uniform curvature, so ICP fine registration is more likely to fall into a local minimum during iteration. The resulting misregistration introduces severe geometric distortion into the overall chassis point cloud and degrades the accuracy and precision of vehicle geometric passing parameter detection.
Disclosure of Invention
To effectively collect point clouds from beneath a vehicle for geometric passing parameter detection, and to overcome the geometric distortion caused by misregistration of point clouds reflecting the underbody geometry, the invention provides a vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection. A lidar carried on a horizontal moving platform travels over flat ground longitudinally beneath the vehicle under test and acquires a sequence of point cloud frames. According to the distribution of these underbody frames, matching point pairs for coarse registration are obtained with a random sample consensus algorithm, FPFH (Fast Point Feature Histogram) features, and a distribution criterion. This improves the distribution of the sampling points without significantly increasing computational complexity, placing most of them on the vehicle, thereby increasing the contribution of the detection subject's point cloud during registration and reducing coarse registration error. In the ICP fine registration stage, motion constraints derived from the flat ground, the platform's motion pattern, and its encoder data are used to remove misregistered point pairs, which reduces iterations and effectively avoids the misregistration caused by falling into a local minimum.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection is characterized by comprising the following steps:
1) acquiring a point cloud sequence frame longitudinally through the bottom of a vehicle to be detected along a straight line by laser radar equipment carried by a horizontal moving platform, and respectively taking adjacent point cloud frames in the point cloud sequence frame as a target registration point cloud frame and a point cloud frame to be registered;
2) roughly registering the target registration point cloud frame and the point cloud frame to be registered, specifically, extracting n groups of rough matching point pairs from the target registration point cloud frame and the point cloud frame to be registered, wherein each sampling point corresponds to at most one matching point pair, and the rough matching point pairs comprise sampling points randomly extracted from the target registration point cloud frame and points corresponding to the sampling points in the point cloud frame to be registered;
3) calculating a rough registration transfer matrix according to the n groups of matching point pairs by an SVD method, and transforming the pose of the point cloud in the point cloud frame to be registered according to the rough registration transfer matrix;
4) selecting ICP-algorithm nearest registration point pairs from the target registration point cloud frame and the transformed point cloud frame to be registered, removing nearest registration point pairs that do not satisfy the prior motion constraint, and then solving a transfer matrix from the remaining nearest registration point pairs by the quaternion method;
5) Transforming the pose of the point cloud in the point cloud frame to be registered according to the transfer matrix, calculating an ICP algorithm error function, outputting a fine registration transfer matrix if the error function meets the ICP algorithm convergence condition, and otherwise, iterating the step 4) and the step 5), wherein the fine registration transfer matrix is obtained by performing cumulative transformation on the rough registration transfer matrix and the transfer matrix in the iteration process;
6) and according to a fine registration transfer matrix obtained by registering adjacent point cloud frames in the point cloud sequence frames, sequentially carrying out pose transformation from the last frame of the time sequence of the point cloud sequence frames and fusing the pose transformation to the previous frame until the pose transformation is fused to the first frame, thereby obtaining the complete point cloud of the vehicle chassis.
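Step 6) above can be sketched as follows. The 4x4-matrix convention (transforms[k] maps frame k+1 into the coordinates of frame k) is an assumption made for this sketch; the patent itself fuses iteratively from the last frame toward the first, which accumulates to the same result.

```python
import numpy as np

def fuse_sequence(frames, transforms):
    """Fuse point cloud frames into the coordinates of the first frame.

    frames[k] is an (N_k, 3) array of points; transforms[k] is the 4x4
    fine registration transfer matrix assumed to map frame k+1 into the
    coordinates of frame k (a hypothetical convention for this sketch).
    """
    fused = [frames[0]]
    accum = np.eye(4)  # accumulated pose: current frame -> frame 0
    for k, frame in enumerate(frames[1:]):
        accum = accum @ transforms[k]
        homog = np.hstack([frame, np.ones((frame.shape[0], 1))])
        fused.append((homog @ accum.T)[:, :3])
    return np.vstack(fused)
```

Chaining the pairwise transfer matrices this way yields the complete chassis point cloud in a single coordinate system.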
The invention has the following beneficial effects. In coarse registration, the sampled matching points are distributed evenly along the z direction in space, increasing the contribution of the vehicle-body point cloud to the registration. In fine registration, a motion constraint based on prior knowledge removes misregistered point pairs from the ICP process. The method reduces distortion caused by misregistration, reduces the overall number of registration iterations, and improves precision, so the fused point cloud keeps good geometric consistency with the real vehicle and provides high-quality data for vehicle geometric passing parameter detection.
Drawings
FIG. 1 is a schematic view of a point cloud acquisition;
FIG. 2 is a method flow diagram;
FIG. 3 is a schematic diagram of coarse registration sampling point distributions: (a) the existing sampling distribution, which easily degrades coarse registration accuracy; (b) the improved sampling distribution;
fig. 4 is the fused point cloud acquired from the vehicle underbody.
Detailed Description
The present invention is further described below.
Referring to fig. 1 and 2, the present invention provides a vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection, comprising the following steps:
the carried laser radar equipment passes through the bottom of the vehicle in a mode shown in figure 1, and the laser radar equipment outputs a point cloud sequence frame according to time sequence. And traversing sequentially according to the time sequence of the point cloud sequence frames, starting from a first frame, taking a current frame as a target registration point cloud frame, taking a next frame as a point cloud frame to be registered, and taking the next frame as an end frame.
In step 2), as shown in fig. 2, points are first extracted from the target registration point cloud frame as sampling points using a random sample consensus algorithm, and their FPFH (Fast Point Feature Histogram) features are computed. The FPFH features of each point in the point cloud frame to be registered are then computed, and each sampling point is paired with a point whose FPFH features are similar to its own, forming a sampling point pair. FPFH similarity means the FPFH features of the two points are identical or satisfy a preset condition. If a sampling point pair satisfies the distribution criterion it is taken as a matching point pair; otherwise sampling continues until n groups of matching point pairs are obtained, where n >= 3 and the first sampling point pair is taken as a matching point pair directly.
Also in step 2), the ring-shaped ground point cloud, which accounts for a large proportion of each frame, is the primary factor degrading the accuracy of the coarse registration result. A distribution criterion is adopted to improve the distribution of the sampled points without significantly increasing the amount of computation, and to increase the contribution of the vehicle-body point cloud during coarse registration, as shown in fig. 3. The distribution criterion is defined as:
[Distribution criterion formula, published as an image (BDA0003234490190000041); not reproduced here]
wherein r is a Euclidean distance threshold, L denotes the Euclidean distance between the current sampling point in the target registration point cloud frame and the sampling point of each previously accepted matching pair, L_z denotes the component of L on the z axis of the world coordinate system, z_max and z_min denote respectively the maximum and minimum values of the point cloud in the target registration point cloud frame on the z axis, and δ is an equipartition empirical parameter.
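Since the published criterion is an image and is not reproduced above, the following is only a hypothetical reconstruction from the variable definitions: it assumes a candidate sampling point is accepted when, relative to every previously accepted sampling point, its Euclidean distance L exceeds r and its vertical component L_z exceeds an equal δ-scaled share of the frame's z-extent. Both the names and the exact inequality are assumptions, not the patent's formula.

```python
import numpy as np

def satisfies_distribution(candidate, accepted, cloud_z, r, delta, n):
    """Hypothetical reconstruction of the distribution criterion.

    candidate: (3,) candidate sampling point; accepted: previously accepted
    sampling points; cloud_z: z values of the target frame's points;
    r: Euclidean distance threshold; delta: equipartition empirical
    parameter; n: number of matching pairs sought.
    """
    z_extent = cloud_z.max() - cloud_z.min()
    z_thresh = delta * z_extent / n  # assumed "equipartition" share of the z-extent
    for p in accepted:
        diff = candidate - p
        if np.linalg.norm(diff) <= r or abs(diff[2]) <= z_thresh:
            return False
    return True  # the first sampling point is always accepted
```

Under this reading, with the embodiment's δ = 0.6 and n = 5, accepted samples spread across roughly equal vertical bands, which pushes most of them off the ground ring and onto the vehicle body.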
In step 3), a coarse registration transfer matrix is solved from the n groups of matching point pairs by the SVD method and used to transform the pose of the point cloud in the point cloud frame to be registered. It serves as the initial matrix for three-dimensional ICP fine registration, so that the point cloud is pose-transformed first and the number of ICP fine registration iterations is reduced.
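The SVD step can be sketched with the standard Kabsch/Umeyama solution. The patent names the SVD method but gives no derivation, so the details below are the textbook ones rather than the patent's own.

```python
import numpy as np

def coarse_transform_svd(src, dst):
    """Rigid transform aligning matched point pairs, solved by SVD.

    src[i] (in the frame to be registered) corresponds to dst[i] (in the
    target registration frame), with n >= 3 pairs. Returns the 4x4
    homogeneous coarse registration transfer matrix.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The reflection guard matters with as few as n = 5 pairs, where a degenerate configuration can otherwise flip the solution.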
In step 4), during ICP fine registration, nearest points must be selected from the point cloud of the target registration frame and the pose-transformed point cloud of the frame to be registered to form registration point pairs. Since the underbody point cloud is known to be collected by a horizontal moving platform carrying the lidar along a straight line over relatively flat ground, this prior motion pattern, together with the platform's encoder readings and empirical parameters, is used as a prior motion constraint to remove erroneous nearest registration point pairs so that they do not participate in ICP fine registration, avoiding the misregistration caused when the ICP fine registration algorithm falls into a local minimum.
As shown in fig. 1, the prior motion in step 4) is derived from encoder data on the four moving wheels of the horizontal moving platform. Viewed from above with the platform moving straight ahead, the encoder readings are denoted clockwise from the upper left as u_1, u_2, u_3, u_4.
Defining a prior motion constraint as:
[Prior motion constraint formula, published as an image (BDA0003234490190000042); not reproduced here]
wherein (x'_i, y'_i, z'_i) and (x_i, y_i, z_i) are the coordinates of the two points in the i-th nearest registration point pair; Σu_1, Σu_2, Σu_3, Σu_4 are respectively the accumulated values of the four encoder readings over the time difference between the current point cloud frame and the point cloud frame to be registered; α is a motion empirical parameter and β is a static empirical parameter.
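Since the constraint inequality is published as an image and is not reproduced above, the sketch below is a hypothetical reconstruction from the variable definitions alone. It assumes: the mean of the four encoder accumulations predicts the platform's travel along an assumed x direction of motion; a nearest pair survives only if its x displacement matches that prediction within a fraction α and its transverse (y, z) displacement stays below β. The interpretation of α and β is an assumption, not the patent's definition.

```python
import numpy as np

def passes_motion_constraint(p_src, p_dst, enc_sums, alpha, beta):
    """Hypothetical prior-motion gate for a nearest registration point pair.

    p_src, p_dst: the pair's (x, y, z) coordinates; enc_sums: the four
    accumulated encoder values over the inter-frame interval; alpha, beta:
    motion and static empirical parameters (interpretation assumed).
    """
    predicted = float(np.mean(enc_sums))  # encoder-predicted travel distance
    d = np.asarray(p_dst, dtype=float) - np.asarray(p_src, dtype=float)
    along_ok = abs(abs(d[0]) - predicted) <= alpha * max(predicted, 1e-9)
    transverse_ok = np.hypot(d[1], d[2]) <= beta
    return bool(along_ok and transverse_ok)
```

Pairs rejected by such a gate would not enter the transfer-matrix solve, which is how the method keeps ICP from locking onto the ground ring.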
In the ICP fine registration process, nearest registration point pairs are first selected between the target registration point cloud frame and the transformed point cloud frame to be registered; pairs that do not satisfy the prior motion constraint are removed and the remaining pairs are retained. A transfer matrix is then solved by the quaternion method, and the pose of the point cloud in the frame to be registered is transformed accordingly. Finally the ICP error function is computed: if the ICP convergence condition is satisfied, the fine registration transfer matrix is output; otherwise the fine registration process iterates until convergence. The fine registration transfer matrix is obtained by cumulatively composing the coarse registration transfer matrix with the per-iteration transfer matrices, and it is the final result of registration between the target registration frame and the frame to be registered, i.e. their relative pose. During fusion, the coordinate system of any frame can be chosen, and the other point cloud frames are transformed and accumulated into it according to the pose information obtained from pairwise registration. In this embodiment, starting from the last frame of the point cloud frame sequence, poses are transformed and fused into the preceding frame one by one according to the sequential registration results, until the first frame is reached, yielding the complete point cloud of the vehicle chassis.
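The fine registration loop can be sketched as follows. The pair-rejection hook stands in for the prior motion constraint, and each step is solved here with the SVD solution for brevity where the patent uses the quaternion method (both yield the same rigid transform); convergence is checked on the change in mean squared pair distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, T_init, keep_pair, max_iter=50, tol=1e-8):
    """Point-to-point ICP sketch with prior-motion pair rejection.

    src: points of the frame to be registered; dst: target frame points;
    T_init: 4x4 coarse registration matrix; keep_pair(p, q) -> bool is the
    rejection hook (e.g. the prior motion constraint). Returns the 4x4
    fine registration transfer matrix (coarse matrix cumulatively updated).
    """
    T = T_init.copy()
    cur = src @ T[:3, :3].T + T[:3, 3]   # pose-transformed source cloud
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(cur)      # nearest registration point pairs
        mask = np.array([keep_pair(p, dst[i]) for p, i in zip(cur, idx)])
        if mask.sum() < 3:
            break
        a, b = cur[mask], dst[idx[mask]]
        H = (a - a.mean(0)).T @ (b - b.mean(0))
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = b.mean(0) - R @ a.mean(0)
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                     # cumulative transfer matrix
        err = np.mean(dist[mask] ** 2)   # ICP error before this step
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```

Passing a `keep_pair` built from the motion constraint removes misregistered pairs before each solve, which is the mechanism the patent uses to keep the iteration out of local minima.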
Fig. 4 shows the result of a point cloud acquisition and fusion experiment on a medium-sized truck. Balancing the number of valid points per frame against real-time performance, the number of matching point pairs was set to n = 5, and the equipartition empirical parameter δ was set to 0.6 according to the height of the tested chassis above the ground and the number of sampling points. According to the vehicle's speed, the motion empirical parameter α was set to 0.005 and the static empirical parameter β to 0.015. As fig. 4 shows, the chassis edges and tires, the critical regions of the vehicle point cloud for geometric passing parameter detection, are in good geometric agreement with the real vehicle, providing a sound data basis for vehicle geometric passing parameter detection.
The above detailed description of the preferred embodiments of the present invention is not intended to limit the scope of the present invention. Variations may also be made within the knowledge of a person of ordinary skill in the art without departing from the invention in its general form.

Claims (5)

1. A vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection is characterized by comprising the following steps:
1) acquiring a point cloud sequence frame longitudinally through the bottom of a vehicle to be detected along a straight line by laser radar equipment carried by a horizontal moving platform, and respectively taking adjacent point cloud frames in the point cloud sequence frame as a target registration point cloud frame and a point cloud frame to be registered;
2) roughly registering the target registration point cloud frame and the point cloud frame to be registered, specifically, extracting n groups of roughly matching point pairs from the target registration point cloud frame and the point cloud frame to be registered, wherein the roughly matching point pairs comprise sampling points randomly extracted from the target registration point cloud frame and points corresponding to the sampling points in the point cloud frame to be registered, and each sampling point corresponds to one matching point pair at most;
3) calculating a rough registration transfer matrix according to the n groups of matching point pairs by an SVD method, and transforming the pose of the point cloud in the point cloud frame to be registered according to the rough registration transfer matrix;
4) selecting ICP-algorithm nearest registration point pairs from the target registration point cloud frame and the transformed point cloud frame to be registered, removing nearest registration point pairs that do not satisfy the prior motion constraint, and then solving a transfer matrix from the remaining nearest registration point pairs by the quaternion method;
5) Transforming the pose of the point cloud in the point cloud frame to be registered according to the transfer matrix, calculating an ICP algorithm error function, outputting a fine registration transfer matrix if the error function meets the ICP algorithm convergence condition, and otherwise, iterating the step 4) and the step 5), wherein the fine registration transfer matrix is obtained by performing cumulative transformation on the rough registration transfer matrix and the transfer matrix in the iteration process;
6) and according to a fine registration transfer matrix obtained by registering adjacent point cloud frames in the point cloud sequence frames, sequentially carrying out pose transformation from the last frame of the time sequence of the point cloud sequence frames and fusing the pose transformation to the previous frame until the pose transformation is fused to the first frame, thereby obtaining the complete point cloud of the vehicle chassis.
2. The vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection according to claim 1, wherein the determination method of the coarse matching point pair in step 2) is as follows:
firstly, randomly extracting sampling points from the target registration point cloud frame through a random sample consensus algorithm and calculating their FPFH (Fast Point Feature Histogram) features;
then, calculating the FPFH features of each point in the point cloud frame to be registered; a point whose FPFH features are similar to those of a sampling point in the target registration point cloud frame forms a sampling point pair with that sampling point; if the sampling point pair satisfies the distribution criterion it is called a group of matching point pairs; and repeating until n groups of matching point pairs satisfying the distribution criterion are found.
3. The vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection according to claim 2, wherein the distribution criterion is:
[Distribution criterion formula, published as an image (FDA0003234490180000021); not reproduced here]
wherein r is a Euclidean distance threshold, L denotes the Euclidean distance between the current sampling point in the target registration point cloud frame and the sampling point of each previously accepted matching pair, L_z denotes the component of L on the z axis of the world coordinate system, z_max and z_min denote respectively the maximum and minimum values of the point cloud in the target registration point cloud frame on the z axis, and δ is an equipartition empirical parameter.
4. The vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection according to claim 1, wherein the prior motion in step 4) is derived from encoder data on the four moving wheels of the horizontal moving platform; viewed from above with the platform moving straight ahead, the encoder readings are denoted clockwise from the upper left as u_1, u_2, u_3, u_4.
5. The vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection according to claim 4, wherein the prior motion constraint in step 4) is:
[Prior motion constraint formula, published as an image (FDA0003234490180000022); not reproduced here]
wherein (x'_i, y'_i, z'_i) and (x_i, y_i, z_i) are the coordinates of the two points in the i-th nearest registration point pair; Σu_1, Σu_2, Σu_3, Σu_4 are respectively the accumulated values of the four encoder readings over the time difference between the current point cloud frame and the point cloud frame to be registered; α is a motion empirical parameter and β is a static empirical parameter.
CN202110997180.5A 2021-08-27 2021-08-27 Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection Active CN113706593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110997180.5A CN113706593B (en) 2021-08-27 2021-08-27 Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection


Publications (2)

Publication Number Publication Date
CN113706593A true CN113706593A (en) 2021-11-26
CN113706593B CN113706593B (en) 2024-03-08

Family

ID=78656125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110997180.5A Active CN113706593B (en) 2021-08-27 2021-08-27 Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection

Country Status (1)

Country Link
CN (1) CN113706593B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN110889243A (en) * 2019-12-20 2020-03-17 南京航空航天大学 Aircraft fuel tank three-dimensional reconstruction method and detection method based on depth camera
CN111381248A (en) * 2020-03-23 2020-07-07 湖南大学 Obstacle detection method and system considering vehicle bump

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114719768A (en) * 2022-03-31 2022-07-08 东风汽车集团股份有限公司 Method for measuring minimum ground clearance of vehicle
CN114719768B (en) * 2022-03-31 2023-12-29 东风汽车集团股份有限公司 Method for measuring minimum ground clearance of vehicle
CN114842150A (en) * 2022-05-20 2022-08-02 北京理工大学 Digital vehicle point cloud model construction method and system fusing pattern information
CN115359131A (en) * 2022-10-20 2022-11-18 北京格镭信息科技有限公司 Calibration verification method, device, system, electronic equipment and storage medium
CN115359131B (en) * 2022-10-20 2023-02-03 北京格镭信息科技有限公司 Calibration verification method, device, system, electronic equipment and storage medium
CN117635896A (en) * 2024-01-24 2024-03-01 吉林大学 Point cloud splicing method based on automobile body point cloud motion prediction
CN117635896B (en) * 2024-01-24 2024-04-05 吉林大学 Point cloud splicing method based on automobile body point cloud motion prediction

Also Published As

Publication number Publication date
CN113706593B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN113706593A (en) Vehicle chassis point cloud fusion method suitable for vehicle geometric passing parameter detection
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN102411778B (en) Automatic registration method of airborne laser point cloud and aerial image
CN107481274B (en) Robust reconstruction method of three-dimensional crop point cloud
CN112560747B (en) Lane boundary interactive extraction method based on vehicle-mounted point cloud data
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
KR102363719B1 (en) Lane extraction method using projection transformation of 3D point cloud map
CN106886988B (en) Linear target detection method and system based on unmanned aerial vehicle remote sensing
US20220398856A1 (en) Method for reconstruction of a feature in an environmental scene of a road
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
Mousa et al. New DTM extraction approach from airborne images derived DSM
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN113221883A (en) Real-time correction method for flight navigation route of unmanned aerial vehicle
CN113947724A (en) Automatic line icing thickness measuring method based on binocular vision
CN117115390A (en) Three-dimensional model layout method of power transformation equipment in transformer substation
CN112241964A (en) Light strip center extraction method for line structured light non-contact measurement
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN111145198A (en) Non-cooperative target motion estimation method based on rapid corner detection
CN113436262A (en) Vision-based vehicle target position and attitude angle detection method
CN112231848B (en) Method and system for constructing vehicle spraying model
CN111198563B (en) Terrain identification method and system for dynamic motion of foot type robot
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant