CN110728753A - Target point cloud 3D bounding box fitting method based on linear fitting - Google Patents

Target point cloud 3D bounding box fitting method based on linear fitting

Info

Publication number
CN110728753A
Authority
CN
China
Prior art keywords
plane
points
point cloud
fitting
bounding box
Prior art date
Legal status
Granted
Application number
CN201910954492.0A
Other languages
Chinese (zh)
Other versions
CN110728753B (en)
Inventor
李智勇
易子越
伍轶强
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN201910954492.0A
Publication of CN110728753A
Application granted
Publication of CN110728753B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target point cloud 3D bounding box fitting method based on linear fitting, which comprises the following steps: 1) ground point filtering; 2) plane detection and segmentation; 3) reference plane optimization; 4) linear fitting; 5) 3D bounding box establishment. The method first removes the ground points from the target point cloud, which prevents them from influencing the final fitting result and makes the fit more accurate. Exploiting the fact that the laser points obtained from the side of the vehicle body project onto the XYO plane as an approximately straight line, the method obtains a 3D box length very close to the length of the vehicle body, then derives the rectangular bottom face of the 3D box from the mutually perpendicular relationship of the adjacent faces of the vehicle, and finally obtains the 3D box from the height of the vehicle body. The calculation is simple, the fitted 3D bounding box is very close to the real size of the vehicle body, the accuracy is high, and the method has good application prospects in the field of autonomous driving.

Description

Target point cloud 3D bounding box fitting method based on linear fitting
Technical Field
The invention particularly relates to a target point cloud 3D bounding box fitting method based on linear fitting.
Background
Autonomous driving is currently a very active field. Multi-modal perception fusion is a very important part of an autonomous driving system: the perception data of the various sensors are fused and object detection is carried out in three-dimensional space, so that the planning module is given a true, reliable and reasonable representation of the vehicle's surroundings. After the point cloud belonging to a vehicle has been segmented using vision and lidar, accurately finding the 3D bounding box of the vehicle is a difficult point. Because of the characteristics of the lidar, the rays it emits are collected only after being reflected by obstacles, so occlusion is unavoidable. The vehicle point clouds collected by the lidar are therefore incomplete, and the crux of the problem is how to extract the true 3D bounding box from these limited vehicle point clouds.
Currently there are many methods for fitting a 3D box to a target point cloud:
a. Project the target point cloud to 2D and obtain the length, width and other pose parameters of the target vehicle by computing the minimum circumscribed rectangle containing all the points, from which the 3D box is fitted. Because of the characteristics of the lidar, the vehicle point cloud projected to 2D is usually "L"-shaped, so the computed minimum circumscribed rectangle deviates considerably from the true boundary; the specific flow is shown in fig. 1 (a small sketch of this projection and minimum-rectangle step is given after item b below).
b. Project the target point cloud to 2D, extract the set of convex hull points, find the line connecting the two extreme points of the obstacle that the lidar can observe through angle calculation, select several edges on the side of that line closer to the radar as candidates, compute a 3D box for each candidate, and finally choose the box with the smallest area as the predicted obstacle box. This method essentially fits by minimizing the projected area of the bounding box and only screens the candidate edges to save some computation, so when faced with an "L"-shaped point cloud it still yields a circumscribed rectangle that deviates considerably from the true boundary.
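For comparison, the projection-and-minimum-rectangle computation of method a can be sketched in a few lines of Python using OpenCV; this illustrative snippet is not part of the patent and the function name is hypothetical.

```python
# A sketch of prior-art method a: project the cloud to the XY plane and take the
# minimum-area circumscribed rectangle of all projected points.
import numpy as np
import cv2

def min_area_rect_2d(points_xyz):
    """points_xyz: (N, 3) array; returns the 4 corners of the minimum circumscribed
    rectangle of the XY projection (prone to large errors on 'L'-shaped clouds)."""
    xy = np.asarray(points_xyz, dtype=np.float32)[:, :2]    # drop Z
    rect = cv2.minAreaRect(xy)                               # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)                               # (4, 2) corner coordinates
```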
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a target point cloud 3D bounding box fitting method based on linear fitting.
A target point cloud 3D bounding box fitting method based on linear fitting comprises the following steps:
1) detecting whether residual ground points exist in the obtained target point cloud, if so, removing the residual ground points from the target point cloud according to the characteristics of the ground points to obtain a modified target point cloud;
2) performing plane detection on the target point cloud modified in the step 1) by using RANSAC (random sample consensus), and then dividing the detected plane into a plurality of sub-planes;
3) respectively extracting the features of each sub-plane obtained in step 2), calculating the score of each sub-plane from those plane features, ranking the sub-planes by score, selecting the sub-plane with the highest score, namely the sub-plane that best matches the characteristics of the vehicle side, as the reference plane, and segmenting the reference plane from the target point cloud to obtain a reference plane point set W1, while recording the point set corresponding to the remaining n (n > 1) sub-planes as W2;
4) projecting the reference plane point set W1 obtained in step 3) onto the XYO plane to obtain the reference plane projection point set W'1, and performing linear fitting on W'1 to obtain the fitted straight line L of the reference plane; L is then evaluated: if L is qualified, continue to step 5), and if it is not qualified, remove the current reference plane from the candidate plane set Ω and return to step 3);
5) calculating the coordinate range of W'1 along the direction of the straight line L obtained in step 4) to obtain the two-dimensional end points A and B of W'1 on the straight line L (A being the one closer to the coordinate origin); then projecting the point set W2 onto the XYO plane to obtain W'2, calculating the maximum perpendicular distance from W'2 to the straight line L, and obtaining from this distance the points D and C corresponding to the two-dimensional end points A and B respectively, wherein A, B, C and D are the 4 vertices of the projection of the top face of the 3D bounding box onto the XYO plane;
6) obtaining the height range of the 3D bounding box from the coordinate range of the target point cloud along the Z axis, and combining it with the 4 vertices of the top-face rectangle of the 3D bounding box projected onto the XYO plane in step 5) to obtain the 4 top-face vertices A1, B1, C1, D1 and the 4 bottom-face vertices A2, B2, C2, D2 of the 3D bounding box; the 3D bounding box of the vehicle is then obtained from the 8 vertices of the bounding box.
In step 1), the residual ground points in the target point cloud are identified by the following characteristics: they are all distributed at the very bottom of the whole target point cloud, and their spread along the z-axis direction is much smaller than their spread along the x and y axes; the ground points are removed according to these characteristics.
In step 3), the specific steps are as follows:
3.1 First, the characteristics of the point cloud on the side of the vehicle are determined, namely: the laser points are distributed along the vertical direction, i.e. the variance D(Z) along the z-axis direction is large; since the projection of the side of the vehicle body onto the XYO plane is approximately a straight line, one of the projection variances along the x and y directions is large and the other small; to handle the case where the projection variances D(X) and D(Y) along the X-axis and Y-axis directions are close to each other, only the larger of D(X) and D(Y) is used in the calculation. In summary, the variance along the X- or Y-axis direction, the variance along the Z-axis direction, and the number of points forming the reference plane are each given a weight λi and summed, giving the calculation formula of the reference plane score P shown in formula (1):
P = λ1·max(D(X), D(Y)) + λ2·D(Z) + λ3·Si    (1)
wherein: Ωi is the i-th sub-plane obtained by segmenting the point cloud in step 2), and Si is the number of laser points forming Ωi;
3.2 substituting the sub-planes in the step 2) into the formula (1) respectively, calculating the scores of the sub-planes, and selecting the sub-plane with the highest score as a reference plane;
3.3 segmenting the target point cloud points contained in the reference plane to obtain the reference plane point set W1.
In step 4), the specific steps are as follows: the reference plane is projected onto the XYO plane to obtain the projected point set W'1(xi, yi); from the characteristics of the side of the vehicle body, these points approximately satisfy a linear relation y = kx + b, so a least-squares fit is used, with k and b as the parameters to be determined; it is therefore necessary to find a pair k and b such that all points in W'1 satisfy yi = k·xi + b, but since the points of W'1 do not all lie on one straight line, no single pair k and b can make both sides of this equation equal, and the difference between yi and k·xi + b can only be made as small as possible; letting p denote a pair k and b, write f(xi, p) = k·xi + b; the goal is to find the p that minimizes the value of the function S in formula (2); if and only if S is minimal, the linear relation y = kx + b corresponding to p is the best fit to the point set; applying the least-squares method to the projection of the reference plane onto the XYO plane yields the fitted straight line L;
S = Σi (yi - f(xi, p))² = Σi (yi - k·xi - b)²    (2)
in the step 5), after two adjacent side surfaces of the vehicle are in a mutually perpendicular relationship and two end points on the straight line L are obtained, the maximum perpendicular distance between the projection point of the target cloud point on the XYO plane and the straight line L is calculated, namely the length of the adjacent vertical plane, then the symmetrical points of the two end points can be obtained according to the length of the vertical plane, and 4 top points projected on the XYO plane rectangle are obtained according to 4 points.
The beneficial effects of the invention are: 1) the method first removes the ground points from the target point cloud, which prevents them from influencing the final fitting result and makes the fit more accurate; 2) exploiting the fact that the laser points obtained from the side of the vehicle body project onto the XYO plane as an approximately straight line, the method obtains a 3D box length very close to the length of the vehicle body, then derives the rectangular bottom face of the 3D box from the mutually perpendicular relationship of the adjacent faces of the vehicle, and finally obtains the 3D box from the height of the vehicle body; 3) the calculation is simple, the fitted 3D bounding box is very close to the real size of the vehicle body, the accuracy is high, and the method has good application prospects in the field of autonomous driving.
Drawings
FIG. 1 is a schematic diagram of the fitting of scheme a in the background art;
FIG. 2 is a flow chart of example 1 of the present invention;
FIG. 3 is the target point cloud obtained in example 1;
FIG. 4 is the target point cloud of example 1 with the ground points removed;
FIG. 5 is the linear fit graph in example 1;
FIG. 6 is a graph of the reference plane linear fitting result in example 1;
FIG. 7 is the 3D bounding box after the fitting is completed in example 1;
FIG. 8 is a diagram of the effect of the method of the present invention in practice.
Detailed Description
Example 1
The flow of this embodiment is shown in fig. 2:
1) ground point filtering:
the method comprises the steps of obtaining a target point cloud picture, as shown in fig. 3, wherein points similar to the points indicated by arrows on the picture are called ground points, and it can be seen from the picture that the ground points are basically distributed at the bottommost part of the whole target point cloud, the distribution fluctuation of the ground points in the z-axis direction is very small and is far smaller than the fluctuation of the ground points in the x-axis and the y-axis directions, according to the characteristics, the ground points in the target point cloud are removed, and the target point cloud picture with the ground points removed is shown in fig. 4. In the step, the point cloud is subjected to ground removing processing, so that the influence of the ground which does not belong to the target on the subsequent fitting step can be effectively avoided.
2) Plane detection and segmentation:
Plane detection is performed on the target point cloud with the ground points removed using RANSAC (random sample consensus), and the detected points are divided into several sub-planes; the distance threshold is set to 0.08, and the i-th sub-plane obtained by the segmentation is denoted Ωi.
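A plain NumPy RANSAC plane fit along the lines of this step is sketched below; the 0.08 distance threshold comes from the embodiment, while the iteration count and function name are assumptions. Repeatedly extracting the inlier set (or clustering the inliers of one plane) yields the sub-planes Ωi.

```python
# A sketch of step 2 (RANSAC plane detection) on the de-grounded cloud.
import numpy as np

def ransac_plane(points, dist_thresh=0.08, iters=200, seed=0):
    """Returns (unit normal n, offset d, inlier indices) of the plane n·p + d ≈ 0."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(sample[0])
        dist = np.abs(points @ n + d)            # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```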
3) Optimizing the reference plane:
Because glass does not reflect the laser, there are usually few points above the windows of a car, while abundant points are distributed over the doors, the rear and the front of the vehicle, so the side of the vehicle body is the surface we want to find as the reference plane. The point cloud on the side of the vehicle body has the following characteristics: the laser points are distributed along the vertical direction, i.e. the variance D(Z) along the z-axis direction is large; since the projection of the side of the vehicle body onto the XYO plane is approximately a straight line, one of the projection variances along the x and y directions is large and the other small; to handle the case where the projection variances D(X) and D(Y) along the X-axis and Y-axis directions are close to each other, only the larger of D(X) and D(Y) is used in the calculation. In addition, the more points the reference plane contains, the better the linear fit. In summary, the variance along the X- or Y-axis direction, the variance along the Z-axis direction, and the number of points forming the reference plane are each given a weight λi and summed, giving the calculation formula of the reference plane score P, as shown in formula (1):
P = λ1·max(D(X), D(Y)) + λ2·D(Z) + λ3·Si    (1)
wherein: Ωi is the i-th sub-plane obtained by segmenting the point cloud in step 2), and Si is the number of laser points forming Ωi.
Each sub-plane from step 2) is substituted into formula (1), its score is calculated, and the sub-plane with the highest score is selected as the reference plane.
The target point cloud points contained in the reference plane are segmented out to obtain the reference plane point set W1; at the same time, the point set corresponding to the remaining n (n > 1) sub-planes is recorded as W2.
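A sketch of the scoring and selection in this step, using the reconstructed score of formula (1), is given below; the weights λ1, λ2, λ3 are free parameters of the method, and the values shown (as well as the function names) are placeholders rather than values given in the patent.

```python
# A sketch of step 3: score each sub-plane with formula (1) and pick the best one.
import numpy as np

def plane_score(sub_plane, lambdas=(1.0, 1.0, 0.01)):
    """sub_plane: (S_i, 3) array of points of Ω_i; returns the score P of formula (1)."""
    l1, l2, l3 = lambdas
    var_x, var_y, var_z = sub_plane.var(axis=0)          # D(X), D(Y), D(Z)
    return l1 * max(var_x, var_y) + l2 * var_z + l3 * len(sub_plane)

def select_reference_plane(sub_planes):
    """sub_planes: list of (S_i, 3) arrays; returns (best index, W1, list W2)."""
    scores = [plane_score(p) for p in sub_planes]
    best = int(np.argmax(scores))
    w1 = sub_planes[best]                                # reference plane point set W1
    w2 = [p for i, p in enumerate(sub_planes) if i != best]
    return best, w1, w2
```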
4) Linear fitting:
The reference plane is projected onto the XYO plane to obtain the projected point set W'1(xi, yi). From the characteristics of the side of the vehicle body, these points approximately satisfy a linear relation y = kx + b, so a least-squares fit is used, with k and b as the parameters to be determined. We therefore wish to find a pair k and b such that all points in W'1 satisfy yi = k·xi + b; but since the points of W'1 do not all lie exactly on one straight line, no single pair k and b can make both sides of this equation equal, and the difference between yi and k·xi + b can only be made as small as possible. Letting p denote a pair k and b, write f(xi, p) = k·xi + b. The goal is to find the p that minimizes the value of the function S in formula (2); if and only if S is minimal, the linear relation y = kx + b corresponding to p is the best fit to the point set.
The least-squares method is applied to the projection of the reference plane onto the XYO plane to compute the fitted straight line L. The fitting result L is then evaluated; if it is qualified, the next step follows, and if it is not qualified, the current reference plane is removed from the candidate plane set Ω and the procedure returns to step 3).
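A least-squares fit of this kind can be written compactly with NumPy, as sketched below; the patent does not spell out the acceptance test for L, so the residual check used here (RMS error under an assumed threshold) is only one plausible choice.

```python
# A sketch of step 4: fit y = k*x + b to the projected reference plane W'1 by least squares.
import numpy as np

def fit_reference_line(w1_xy, rms_thresh=0.1):
    """w1_xy: (S, 2) projected reference-plane points; returns (k, b, accepted)."""
    x, y = w1_xy[:, 0], w1_xy[:, 1]
    k, b = np.polyfit(x, y, deg=1)            # minimizes S = Σ (y_i - (k*x_i + b))²
    rms = float(np.sqrt(np.mean((y - (k * x + b)) ** 2)))
    # Note: if the side is nearly parallel to the y axis, fitting x = k*y + b
    # instead is numerically more stable (a detail not addressed in the patent).
    return k, b, rms < rms_thresh
```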
5) Establishment of the 3D bounding box:
The two-dimensional end points A and B are obtained by calculating the coordinate range of the reference plane along the direction of L (as shown in fig. 6). From prior knowledge, two adjacent sides of a vehicle are perpendicular to each other, so the maximum perpendicular distance from the projections of the target point cloud onto the XYO plane to the straight line L is calculated, which is the length of the perpendicular face. From this length, the corresponding point D of the end point A is obtained (the line AD is perpendicular to the straight line L and its length equals the maximum perpendicular distance), as is the corresponding point C of B (the line BC is perpendicular to the straight line L and its length equals the maximum perpendicular distance). With the four points A, B, C and D as vertices (as shown in fig. 7), the projected rectangle of the top face of the 3D bounding box on the XYO plane is obtained.
The height range of the 3D bounding box is obtained from the coordinate range of the target point cloud along the Z axis, and combining it with the 4 vertices of the top-face rectangle of the 3D bounding box projected onto the XYO plane in step 5) gives the 4 top-face vertices A1, B1, C1, D1 and the 4 bottom-face vertices A2, B2, C2, D2 of the 3D bounding box. The 3D bounding box of the vehicle is then obtained from the 8 vertices of the bounding box (as shown in fig. 7).
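The final lifting of the rectangle to the 3D bounding box is straightforward; the short sketch below (function name assumed) pairs the rectangle with the z-coordinate range of the target point cloud to produce the 8 vertices.

```python
# A sketch of step 6: combine the top-face rectangle with the z range of the cloud.
import numpy as np

def box_from_rectangle(rect_xy, points_xyz):
    """rect_xy: (4, 2) corners A, B, C, D; points_xyz: (N, 3) target point cloud.
    Returns an (8, 3) array: A1, B1, C1, D1 (top face) then A2, B2, C2, D2 (bottom face)."""
    z_min, z_max = points_xyz[:, 2].min(), points_xyz[:, 2].max()
    top = np.hstack([rect_xy, np.full((4, 1), z_max)])
    bottom = np.hstack([rect_xy, np.full((4, 1), z_min)])
    return np.vstack([top, bottom])
```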
The method of the invention was used to fit 3D bounding boxes to vehicles parked along a street; the result is shown in fig. 8. It can be seen from fig. 8 that the size and orientation of the 3D bounding boxes obtained by the fitting method of the invention change with the size of each vehicle body and the way it is parked, and that the fitted 3D bounding boxes are very close to the actual dimensions of the vehicle bodies, which shows that the 3D bounding box fitting accuracy of the invention is very high.

Claims (5)

1. A target point cloud 3D bounding box fitting method based on linear fitting, comprising the following steps:
1) detecting whether residual ground points exist in the obtained target point cloud, if so, removing the residual ground points from the target point cloud according to the characteristics of the ground points to obtain a modified target point cloud;
2) performing plane detection on the target point cloud modified in step 1) using random sample consensus (RANSAC), and then dividing the detected plane into a plurality of sub-planes;
3) respectively extracting the features of each sub-plane obtained in step 2), calculating the score of each sub-plane from those plane features, ranking the sub-planes by score, selecting the sub-plane with the highest score, namely the sub-plane that best matches the characteristics of the vehicle side, as the reference plane, and segmenting the reference plane from the target point cloud to obtain a reference plane point set W1, while recording the point set corresponding to the remaining n sub-planes as W2;
4) projecting the reference plane point set W1 obtained in step 3) onto the XYO plane to obtain the reference plane projection point set W'1, performing linear fitting on W'1 to obtain the fitted straight line L of the reference plane, and evaluating L; if L is qualified, continuing to step 5), and if it is not qualified, removing the current reference plane from the candidate plane set Ω and returning to step 3);
5) calculating the coordinate range of W'1 along the direction of the straight line L obtained in step 4) to obtain the two-dimensional end points A and B of W'1 on the straight line L; then projecting the point set W2 onto the XYO plane to obtain W'2, calculating the maximum perpendicular distance from W'2 to the straight line L, and obtaining from this distance the points D and C corresponding to the two-dimensional end points A and B respectively, wherein A, B, C and D are the 4 vertices of the projection of the top face of the 3D bounding box onto the XYO plane;
6) obtaining the height range of the 3D bounding box from the coordinate range of the target point cloud along the Z axis, combining it with the 4 vertices of the top-face rectangle of the 3D bounding box projected onto the XYO plane in step 5) to obtain the 4 top-face vertices A1, B1, C1, D1 and the 4 bottom-face vertices A2, B2, C2, D2 of the 3D bounding box, and obtaining the 3D bounding box of the vehicle from the 8 vertices of the bounding box.
2. The target point cloud 3D bounding box fitting method based on linear fitting according to claim 1, wherein in step 1), the residual ground points in the target point cloud are identified by the following characteristics: they are all distributed at the very bottom of the whole target point cloud, and their spread along the z-axis direction is much smaller than their spread along the x and y axes; the ground points are removed according to these characteristics.
3. The target point cloud 3D bounding box fitting method based on linear fitting of claim 1, wherein in step 3), the specific steps are as follows:
3.1 first, the characteristics of the point cloud on the side of the vehicle are determined, namely: the laser points are distributed along the vertical direction, i.e. the variance D(Z) along the z-axis direction is large; since the projection of the side of the vehicle body onto the XYO plane is approximately a straight line, one of the projection variances along the x and y directions is large and the other small; to handle the case where the projection variances D(X) and D(Y) along the X-axis and Y-axis directions are close to each other, only the larger of D(X) and D(Y) is used in the calculation; in summary, the variance along the X- or Y-axis direction, the variance along the Z-axis direction, and the number of points forming the reference plane are each given a weight λi and summed, giving the calculation formula of the reference plane score P shown in formula (1):
P = λ1·max(D(X), D(Y)) + λ2·D(Z) + λ3·Si    (1)
wherein: Ωi is the i-th sub-plane obtained by segmenting the point cloud in step 2), and Si is the number of laser points forming Ωi;
3.2 substituting each sub-plane from step 2) into formula (1), calculating the score of each sub-plane, and selecting the sub-plane with the highest score as the reference plane;
3.3 segmenting the target point cloud points contained in the reference plane to obtain the reference plane point set W1.
4. The target point cloud 3D bounding box fitting method based on linear fitting of claim 1, wherein in step 4), the specific steps are as follows: the reference plane is projected onto the XYO plane to obtain the projected point set W'1(xi, yi); from the characteristics of the side of the vehicle body, these points approximately satisfy a linear relation y = kx + b, so a least-squares fit is used, with k and b as the parameters to be determined; it is therefore necessary to find a pair k and b such that all points in W'1 satisfy yi = k·xi + b, but since the points of W'1 do not all lie on one straight line, no single pair k and b can make both sides of this equation equal, and the difference between yi and k·xi + b can only be made as small as possible; letting p denote a pair k and b, write f(xi, p) = k·xi + b; the goal is to find the p that minimizes the value of the function S in formula (2); if and only if S is minimal, the linear relation y = kx + b corresponding to p is the best fit to the point set; the least-squares method is applied to the projection of the reference plane onto the XYO plane to compute the fitted straight line L;
S = Σi (yi - f(xi, p))² = Σi (yi - k·xi - b)²    (2)
5. The target point cloud 3D bounding box fitting method based on linear fitting of claim 1, wherein in step 5), given that two adjacent sides of a vehicle are perpendicular to each other, after the two end points on the straight line L have been obtained, the maximum perpendicular distance from the projections of the target point cloud onto the XYO plane to the straight line L is calculated, which is the length of the adjacent perpendicular face; the corresponding points of the two end points are then obtained from this length, and the 4 vertices of the rectangle projected onto the XYO plane are obtained from these 4 points.
CN201910954492.0A 2019-10-09 2019-10-09 Target point cloud 3D bounding box fitting method based on linear fitting Active CN110728753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910954492.0A CN110728753B (en) 2019-10-09 2019-10-09 Target point cloud 3D bounding box fitting method based on linear fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910954492.0A CN110728753B (en) 2019-10-09 2019-10-09 Target point cloud 3D bounding box fitting method based on linear fitting

Publications (2)

Publication Number Publication Date
CN110728753A true CN110728753A (en) 2020-01-24
CN110728753B CN110728753B (en) 2022-04-15

Family

ID=69219810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910954492.0A Active CN110728753B (en) 2019-10-09 2019-10-09 Target point cloud 3D bounding box fitting method based on linear fitting

Country Status (1)

Country Link
CN (1) CN110728753B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476902A (en) * 2020-04-27 2020-07-31 北京小马慧行科技有限公司 Method and device for labeling object in 3D point cloud, storage medium and processor
CN111507341A (en) * 2020-04-20 2020-08-07 广州文远知行科技有限公司 Method, device and equipment for adjusting target bounding box and storage medium
CN112099025A (en) * 2020-08-20 2020-12-18 杭州飞步科技有限公司 Method, device and equipment for positioning vehicle under bridge crane and storage medium
CN112598061A (en) * 2020-12-23 2021-04-02 中铁工程装备集团有限公司 Tunnel surrounding rock clustering and grading method
CN112884823A (en) * 2021-01-13 2021-06-01 上海建工四建集团有限公司 Running rule simulation point cloud selection and calculation method for actual measurement application
CN113469887A (en) * 2021-09-02 2021-10-01 深圳市信润富联数字科技有限公司 Object digital-to-analog conversion method, device, equipment and storage medium
CN114399550A (en) * 2022-01-18 2022-04-26 中冶赛迪重庆信息技术有限公司 Automobile saddle extraction method and system based on three-dimensional laser scanning
CN115170648A (en) * 2022-06-29 2022-10-11 广西柳工机械股份有限公司 Carriage pose determining method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016073108A1 (en) * 2014-11-06 2016-05-12 Symbol Technologies, Llc Non-parametric method of and system for estimating dimensions of objects of arbitrary shape
CN108280852A (en) * 2018-01-16 2018-07-13 常景测量科技(武汉)有限公司 A kind of door and window point cloud shape detecting method and system based on laser point cloud data
CN108709513A (en) * 2018-04-10 2018-10-26 深圳市唯特视科技有限公司 A kind of three-dimensional vehicle detection method based on model-fitting algorithms
US20180322698A1 (en) * 2016-08-22 2018-11-08 Pointivo, Inc. Methods and systems for wireframes of a structure or element of interest and wireframes generated therefrom
US20190266418A1 (en) * 2018-02-27 2019-08-29 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
US20190266741A1 (en) * 2018-02-23 2019-08-29 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for object detection using edge characteristics
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016073108A1 (en) * 2014-11-06 2016-05-12 Symbol Technologies, Llc Non-parametric method of and system for estimating dimensions of objects of arbitrary shape
US20180322698A1 (en) * 2016-08-22 2018-11-08 Pointivo, Inc. Methods and systems for wireframes of a structure or element of interest and wireframes generated therefrom
CN108280852A (en) * 2018-01-16 2018-07-13 常景测量科技(武汉)有限公司 A kind of door and window point cloud shape detecting method and system based on laser point cloud data
US20190266741A1 (en) * 2018-02-23 2019-08-29 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for object detection using edge characteristics
US20190266418A1 (en) * 2018-02-27 2019-08-29 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
CN108709513A (en) * 2018-04-10 2018-10-26 深圳市唯特视科技有限公司 A kind of three-dimensional vehicle detection method based on model-fitting algorithms
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL KUSENBACH ET AL.: "A new geometric 3D lidar feature for model creation and classification of moving objects", 2016 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV) *
王森援 (WANG SENYUAN): "Research on key technologies of building surface reconstruction based on high-resolution 3D point clouds", China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Engineering Science and Technology II *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507341A (en) * 2020-04-20 2020-08-07 广州文远知行科技有限公司 Method, device and equipment for adjusting target bounding box and storage medium
CN111476902A (en) * 2020-04-27 2020-07-31 北京小马慧行科技有限公司 Method and device for labeling object in 3D point cloud, storage medium and processor
CN111476902B (en) * 2020-04-27 2023-10-24 北京小马慧行科技有限公司 Labeling method and device for objects in 3D point cloud, storage medium and processor
CN112099025A (en) * 2020-08-20 2020-12-18 杭州飞步科技有限公司 Method, device and equipment for positioning vehicle under bridge crane and storage medium
CN112099025B (en) * 2020-08-20 2024-04-02 杭州飞步科技有限公司 Method, device, equipment and storage medium for positioning vehicle under bridge crane
CN112598061A (en) * 2020-12-23 2021-04-02 中铁工程装备集团有限公司 Tunnel surrounding rock clustering and grading method
CN112884823A (en) * 2021-01-13 2021-06-01 上海建工四建集团有限公司 Running rule simulation point cloud selection and calculation method for actual measurement application
CN112884823B (en) * 2021-01-13 2023-09-22 上海建工四建集团有限公司 Guiding rule simulation point cloud selection and calculation method for actual measurement actual quantity application
CN113469887A (en) * 2021-09-02 2021-10-01 深圳市信润富联数字科技有限公司 Object digital-to-analog conversion method, device, equipment and storage medium
CN114399550A (en) * 2022-01-18 2022-04-26 中冶赛迪重庆信息技术有限公司 Automobile saddle extraction method and system based on three-dimensional laser scanning
CN115170648A (en) * 2022-06-29 2022-10-11 广西柳工机械股份有限公司 Carriage pose determining method and device
CN115170648B (en) * 2022-06-29 2023-04-07 广西柳工机械股份有限公司 Carriage pose determining method and device

Also Published As

Publication number Publication date
CN110728753B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
CN110728753B (en) Target point cloud 3D bounding box fitting method based on linear fitting
Schauer et al. The peopleremover—removing dynamic objects from 3-d point cloud data by traversing a voxel occupancy grid
CN111929699B (en) Laser radar inertial navigation odometer considering dynamic obstacle and map building method and system
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
EP3168812B1 (en) System and method for scoring clutter for use in 3d point cloud matching in a vision system
CN109801333B (en) Volume measurement method, device and system and computing equipment
CN109522804B (en) Road edge identification method and system
CN111612728B (en) 3D point cloud densification method and device based on binocular RGB image
CN110555407B (en) Pavement vehicle space identification method and electronic equipment
CN105517677A (en) Depth/disparity map post-processing method and apparatus
CN108280852B (en) Door and window point cloud shape detection method and system based on laser point cloud data
CN108074232B (en) Voxel segmentation-based airborne LIDAR building detection method
JP7050763B2 (en) Detection of objects from camera images
CN113744351B (en) Underwater structure light measurement calibration method and system based on multi-medium refraction imaging
CN107808524B (en) Road intersection vehicle detection method based on unmanned aerial vehicle
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN113281782A (en) Laser radar snow point filtering method based on unmanned vehicle
CN109840463A (en) A kind of Lane detection method and apparatus
Rosero et al. Calibration and multi-sensor fusion for on-road obstacle detection
US10223803B2 (en) Method for characterising a scene by computing 3D orientation
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
Kochi et al. 3D modeling of architecture by edge-matching and integrating the point clouds of laser scanner and those of digital camera
CN116109701A (en) Object grabbing method based on passive dual-purpose high light reflection
Oniga et al. Fast obstacle detection using U-disparity maps with stereo vision
CN113591640B (en) Road guardrail detection method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant