CN110119751B - Laser radar point cloud target segmentation method, target matching method, device and vehicle - Google Patents


Info

Publication number
CN110119751B
CN110119751B (application CN201810116188.4A)
Authority
CN
China
Prior art keywords
target
point
matching
targets
point cloud
Prior art date
Legal status
Active
Application number
CN201810116188.4A
Other languages
Chinese (zh)
Other versions
CN110119751A (en)
Inventor
杨贵
Current Assignee
Navinfo Co Ltd
Original Assignee
Navinfo Co Ltd
Priority date
Filing date
Publication date
Application filed by Navinfo Co Ltd filed Critical Navinfo Co Ltd
Priority to CN201810116188.4A priority Critical patent/CN110119751B/en
Publication of CN110119751A publication Critical patent/CN110119751A/en
Application granted granted Critical
Publication of CN110119751B publication Critical patent/CN110119751B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/50: Systems of measurement based on relative movement of target
    • G01S 17/58: Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S 17/66: Tracking systems using electromagnetic waves other than radio waves
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques

Abstract

The application discloses a laser radar point cloud target segmentation method, a target matching method, a device and a vehicle. The target segmentation method comprises the following steps: classifying the point cloud obtained by scanning targets with a multi-line laser radar to obtain subclasses of points belonging to the same target; projecting the point cloud of each subclass onto a two-dimensional plane; acquiring an outer envelope box of each subclass; and merging subclasses whose outer envelope boxes intersect into one class. Because the outer envelope boxes are used to merge the subclasses obtained after point cloud segmentation, no threshold is needed during merging, which overcomes the difficulty of determining a threshold in the prior art. The changed point cloud classification method reduces the amount of calculation; targets are entered into a database, and targets in a new frame are compared and identified against the targets in the database, which effectively handles the situation where a target is lost in some frames and avoids accidents.

Description

Laser radar point cloud target segmentation method, target matching method, device and vehicle
Technical Field
The application relates to laser radar point cloud, in particular to a laser radar point cloud target segmentation method, a target matching method, a device and a vehicle.
Background
In the prior art, matching targets across different image frames is particularly important for tracking a target. The technical scheme adopted by the prior art mainly comprises the following steps: extract the geometric features of the target, calculate the geometric similarity between the targets in the previous and current frames, predict the position in the current frame from the geometric similarity combined with the previously measured speed, and thereby match the target between the two frames; then calculate the distance between homonymous points (inflection points, center points and the like) of the same object in the two frames and obtain the speed from the time between the two frames. Because the scanned shape of a target is unstable, a position scanned in one frame may not be scanned in the next frame, so errors occur in the speed calculation; a Kalman or particle filter is therefore applied, and the filtered speed is used for the next round of position prediction. During target tracking the target is assumed to be continuously visible, but in real data a target often cannot be scanned in certain frames, so the algorithm identifies the target as a new target when it appears again. The speed is calculated by treating target features (such as the center point, boundary points or inflection points of the point set) as homonymous points between two frames; however, in real data the laser rarely scans exactly the same position in two frames, so the homonymous points are inaccurate and the accuracy of the computed speed is extremely low. Algorithms using Kalman filtering also have large time delays and cannot respond quickly to high-speed targets.
In order to distinguish the objects, the point cloud needs to be segmented. In the prior art, the general segmentation method for laser point clouds is region growing. The specific scheme is: for any point, calculate its distance to all other points; if the distance is less than a threshold, the points are classified into the same class, and this is iterated until no more points can be added to the existing classes. To reduce the amount of calculation, the non-ground point cloud is usually voxelized with an octree, and clustering segmentation is performed with a region growing method based on octree voxel grids. In the prior art, for data with N scan points, the computational complexity of the plain region growing method is O(N²) and that of the octree-based region growing method is O(N·log(N)); since one frame often contains 20,000 to 200,000 laser points, the amount of calculation for target segmentation of lidar point clouds is large. In addition, the octree construction process consumes time and occupies more than twice the storage space of the point cloud data. Moreover, region-growing-based methods mis-segment targets in many cases because of irregular target shapes. For example, for a "convex"-shaped target, the distance between the echo points of two laser beams can be very long; if the clustering threshold is set too small, the same target is mis-divided into two targets, and if the threshold is enlarged, two targets that are close together are mistakenly merged into the same class. Therefore, using a threshold to decide whether points belong to the same class is highly unreliable.
Disclosure of Invention
In view of the above, the present application provides a laser radar point cloud target segmentation method, a target matching method, an apparatus and a vehicle, so as to realize point cloud classification and matching of targets in a target library.
The application provides a multi-line laser radar point cloud subclass merging method, which comprises the following steps:
classifying point clouds obtained by scanning targets by the multi-line laser radar to obtain subclasses of the point clouds of the same target;
projecting the point cloud of the subclass onto a two-dimensional plane;
acquiring an outer envelope box of each subclass;
merging the sub-classes intersected by the outer envelope boxes into one class.
The application provides a target matching method, which comprises the following steps:
predicting the displacement of a target according to the current time and the preceding speed and preceding time of the target in the target library;
matching according to the current features of the target, the preceding features of the target in the target library, and the predicted target displacement;
wherein the current features of the target and/or the preceding features of the targets in the target library are geometric features of a class obtained according to the above method.
The application provides a multi-line laser radar point cloud classification device, comprising:
a storage device for storing a program;
a processor for executing the program to implement the method.
The application provides a target matching device, including:
a storage device for storing a program;
a processor for executing the program to implement the method.
A storage device is provided having a program stored thereon which, when executed by a processor, implements the above method.
The application provides a vehicle comprising the device.
In the present application, the subclasses obtained after point cloud segmentation are merged through their outer envelope boxes; no threshold is needed during merging, which overcomes the difficulty of determining a threshold in the prior art. The changed point cloud classification method reduces the amount of calculation. Targets are entered into a database, and targets in a new frame are compared and identified against the targets in the database, which effectively handles the situation where a target is lost in some frames and avoids accidents; when a lost target appears again, the previous target can be recognized, so its earlier motion attribute information is not lost, the system's ability to anticipate changes in the target's state is enhanced, and the running stability of the system is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a method of object matching as provided herein;
FIG. 2 is a flow chart of the matching provided herein;
FIG. 3 is a schematic illustration of object merging and splitting provided herein;
FIG. 4 is a schematic diagram of target database update and target velocity filtering provided herein;
FIG. 5 is a schematic illustration of a multiline lidar system provided by the present application;
FIG. 6 illustrates a point cloud classification and merging method provided herein;
FIGS. 7-9 are schematic views of lidar beam scanning provided by the present application;
FIG. 10 is a schematic illustration of the horizontal and vertical angles of the echo point provided by the present application;
FIG. 11 is a schematic view of the per-beam segmentation provided herein;
FIG. 12 is a schematic of the outer envelope box calculation provided herein;
FIG. 13A is a schematic diagram of a distance calculation provided herein;
FIG. 13B is a schematic diagram of outer envelope box intersection provided by the present application;
FIG. 14 is a schematic view of the subclass merging provided herein;
FIG. 15 is a flow chart of object matching and velocity measurement provided herein;
FIG. 16 is a rear-view illustration of a subject vehicle provided by the present application;
fig. 17 is a schematic diagram of a lidar point cloud classification device provided by the present application.
Detailed Description
As used in the specification and in the claims, certain terms are used to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This specification and the claims do not intend to distinguish between components that differ in name but not in function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and thus should be interpreted to mean "including, but not limited to". "Substantially" means within an acceptable error range; a person skilled in the art can solve the technical problem within a certain error range and substantially achieve the technical effect. The description which follows is of preferred embodiments of the present application, but is made for the purpose of illustrating the general principles of the application and not for the purpose of limiting the scope of the application. The protection scope of the present application shall be subject to the definitions of the appended claims.
In order to reduce the complexity of point cloud segmentation, the point cloud segmentation method is changed according to the characteristics of the point cloud. First, the input frame of lidar scan data is resolved to obtain coordinates in a local coordinate system; then ground points are removed based on the installation height of the device, and the point cloud of each beam is stored in its corresponding buffer. Finally, the outer envelope box of each subclass is calculated, the subclasses are merged according to whether their envelope boxes intersect, and the laser points of each resulting class are collected and output.
The multi-line lidar is shown in fig. 5; the simulation shows the spatial relationship when 3 laser beams scan simultaneously. In this application, lidars with 2, 3 or more lines are referred to as multi-line lidars. The multi-line lidar rotates in one direction, each detector arranged in the vertical direction measures distance during the rotation, and the scanning of each beam is independent of the others. The lidar rotates anticlockwise to complete a three-dimensional scan of its surroundings; OO' is the longitudinal axis of the laser, and the three laser transceiver units (green, blue and orange) each sweep out a circle as the lidar rotates.
The targets of an image frame can be determined by applying the classification and merging method provided by the application to the point cloud scanned by each beam of the multi-line lidar. The classification and merging method provided by the present application is shown in fig. 6, and specifically includes:
step 605, inputting scan data;
the characteristics of the echo of any one of the beams of the lidar will be described below by way of example. Based on the detection principle of laser radar rotation scanning, the laser radar scans four three-dimensional vehicle targets shown in fig. 7 and guard rails on the roadside in the middle.
A schematic view of the lidar and the surrounding targets projected onto a two-dimensional plane is shown in fig. 8, from which it can be seen that, because of line-of-sight occlusion, only the two sides of each vehicle facing the lidar can be scanned. Due to occlusion by the vehicles, the road guardrails are scanned as several small, cut-off sections. The echo points obtained by the lidar for the targets illustrated in fig. 8 are shown schematically in fig. 9. In fig. 9 the lidar rotates counterclockwise and scans, in turn: vehicle 1, right railing, vehicle 2, right railing, left railing, vehicle 3, left railing, vehicle 4, a total of 8 targets. The point clouds of these 8 targets serve as the scan data input to be segmented and merged.
Step 610, calculating coordinates of echo points
According to the values recorded during the lidar scan, namely the horizontal angle α of the echo point, the vertical angle β of the echo point and the distance measurement D, the coordinates of the echo point in the local coordinate system can be calculated, as shown in fig. 10; the calculation formula is as follows:
[Formula image: conversion of (α, β, D) into the local coordinates (x, y, z) of the echo point.]
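The exact formula is given only as an image in the original; as a minimal sketch, the conversion below assumes the common lidar convention in which α rotates about the vertical axis and β is the elevation angle (an assumption, not the patent's stated formula):

```python
import math

def echo_point_to_local(alpha_deg, beta_deg, D):
    """Convert one echo (horizontal angle, vertical angle, range) to local x/y/z.

    Assumed convention: x = D*cos(beta)*sin(alpha), y = D*cos(beta)*cos(alpha),
    z = D*sin(beta); D is the measured range in meters.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    x = D * math.cos(b) * math.sin(a)
    y = D * math.cos(b) * math.cos(a)
    z = D * math.sin(b)
    return x, y, z
```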
Step 615, removing ground points. Create structure (x/y/z/D) arrays R1, R2, ..., RX (X is the total number of beams of the lidar). Starting from the first point, if z is greater than the threshold -H (in the local coordinate system the lidar center is at z = 0 and H is the lidar installation height, so the ground height is -H), store the value (x/y/z/D) of the current point into array Ri (i is the number of the beam to which the current point belongs).
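A short sketch of this ground-point filter, assuming each raw point already carries its beam index together with x/y/z/D (the field names are illustrative):

```python
def split_by_beam_without_ground(points, num_beams, H):
    """Drop ground points (z <= -H) and store the remaining points per beam.

    points: iterable of dicts {'beam': i, 'x': ..., 'y': ..., 'z': ..., 'D': ...}
    H: lidar installation height (the lidar center is z = 0 in the local frame).
    Returns a list R where R[i] holds the (x, y, z, D) tuples of beam i.
    """
    R = [[] for _ in range(num_beams)]
    for p in points:
        if p['z'] > -H:                       # above the ground plane, keep it
            R[p['beam']].append((p['x'], p['y'], p['z'], p['D']))
    return R
```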
Step 620, classify the point cloud obtained by each beam of the multi-line lidar scanning the targets to obtain the subclasses of points belonging to the same target; the specific process is shown in fig. 11:
step 1105, initialize the segmentation and set the current initial class number C to 1;
step 1110, begin processing a beam, with the point number k in the current beam set to 0;
step 1115, set the class number of the first point of the current beam to C, and set the current point number k to 1;
step 1120, determine whether the distance between the next point and the current point is greater than 0.1 + D·T (in meters); if so, execute step 1125, otherwise execute step 1130;
where D is the distance measurement value (in meters) of the current point and T is a given threshold coefficient, usually set to 0.001 (i.e. the maximum allowed point cloud gap for the same target is 0.1 meter at the closest range and 0.2 meter at 100 meters);
step 1125, if the distance between the next point and the current point is greater than 0.1 + D·T, then C = C + 1;
step 1130, set the class number of the next point to C;
step 1135, determine whether all points have been processed; if so, execute step 1140, otherwise execute step 1115;
step 1140, determine whether all scan lines (beams) have been processed; if so, the process ends, otherwise step 1110 is performed.
Taking the point cloud of fig. 9 as an example, the above segmentation method can classify the points as follows:
class 1: point numbers 1-6;
class 2: point numbers 7-16;
class 3: point numbers 17-22;
class 4: point numbers 23-26;
class 5: point numbers 27-30;
class 6: point numbers 31-36;
class 7: point numbers 37-46;
class 8: point numbers 47-52.
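A minimal sketch of the per-beam segmentation of steps 1105-1141, using the adjacent-point rule that a gap larger than 0.1 + D·T opens a new subclass (one beam only; continuing the class counter across beams is omitted, and the data layout is an assumption):

```python
import math

def segment_beam(points, T=0.001):
    """Assign a subclass number to every point of one beam.

    points: list of (x, y, z, D) tuples in scan order.
    T: threshold coefficient, so the allowed gap is 0.1 + D*T meters.
    Returns one class number per point.
    """
    labels = []
    c = 1
    for k, p in enumerate(points):
        if k == 0:
            labels.append(c)
            continue
        prev = points[k - 1]
        gap = math.dist(prev[:3], p[:3])   # distance between adjacent echo points
        if gap > 0.1 + prev[3] * T:        # D taken from the earlier of the two points
            c += 1                         # start a new subclass
        labels.append(c)
    return labels
```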
Points with smaller numbers are scanned earlier by the laser beam. In the point cloud segmentation process, this lidar point cloud segmentation method, based on the principle that echo points of the same target are adjacent, completes the category segmentation of the point cloud in a single pass by calculating only the distance between consecutive points, and the total amount of calculation including the subsequent category merging is only O(2N). The direct benefit of this low, simple and reliable amount of computation is that the algorithm runs fast, is easy to port to low-cost computing platforms, and can perform real-time processing on a low-performance computing platform. The computation of existing point cloud segmentation methods is O(N²) or O(N·log(N)) and needs 50 ms-100 ms to segment one frame of data, whereas the computation of this method is only O(2N) and the whole calculation is completed in less than 3 ms. For an autonomous vehicle moving at high speed, the faster the processing, the more effectively dynamic changes around the vehicle can be sensed and the more responsive the decision making. Lasers detect accurately and with high stability and are therefore often used as the main sensor of an autonomous vehicle. To handle the huge computation of lidar point clouds, existing autonomous vehicles need an advanced CPU or GPU, which limits their commercial adoption at the present stage. Because the algorithm is simple, the method of the application can achieve real-time processing when ported to a low-cost microcontroller unit (MCU, such as a single-chip microcomputer), thereby accelerating the commercial deployment of autonomous vehicles.
Step 625, calculate the outer envelope box for each subclass. For each subclass, find the point farthest from the straight line formed by the first and last points of the subclass, and then the triangle formed by the three points is used as the outer envelope box. When the outer envelope box is calculated, the point cloud is projected on a two-dimensional plane, and the outer envelope box is determined according to the point cloud which belongs to the same subclass on the two-dimensional plane. As shown in the specific flowchart of fig. 12, the method specifically includes:
step 1205, compute the straight line L through P1 and Pn, set the current point k = 1, the initial point number m = 1, and the farthest distance D0 = 0, where P1 to Pn are the points of the subclass on the two-dimensional projection plane (i.e. only the x and y coordinates of the points are used), as shown in fig. 13A.
step 1210, advance the current point (k++); for point Pk of class Cij (the j-th class of the i-th beam), with Pk ∈ Cij and 1 < k < n, calculate the distance Dk from the point to the line L;
step 1215, determine whether D0 is less than Dk; if so, perform step 1220, otherwise perform step 1225;
step 1220, if D0 < Dk, set D0 = Dk and record the point number at that time (m = k);
in step 1225, it is determined whether all the points have been processed, if yes, step 1230 is executed, otherwise step 1210 is executed.
step 1230, the point Pm (1 < m < n) of Cij having the largest distance from the straight line, as shown in FIG. 13A, is the inflection point of the outer contour; the triangle formed by the three points P1, Pm, Pn is used as the outer envelope box.
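A sketch of the envelope-box computation of steps 1205-1230: take the chord from P1 to Pn on the x-y projection and pick the point farthest from it (names and data layout are illustrative):

```python
def envelope_triangle(points):
    """Return the triangle (P1, Pm, Pn) used as the subclass's outer envelope box.

    points: list of (x, y) projections of one subclass, in scan order.
    """
    p1, pn = points[0], points[-1]
    ax, ay = p1
    bx, by = pn
    best_d, m = 0.0, 0
    for k in range(1, len(points) - 1):
        px, py = points[k]
        # point-to-line distance: |cross(Pn - P1, P1 - Pk)| / |Pn - P1|
        num = abs((bx - ax) * (ay - py) - (by - ay) * (ax - px))
        den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5 or 1e-9
        d = num / den
        if d > best_d:
            best_d, m = d, k
    return p1, points[m], pn
```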
Step 630, merge subclasses according to the outer envelope boxes: for any two subclasses, whether there is an inclusion or intersection relationship is determined from their outer envelope boxes; if so, the two subclasses are classified into the same class, as shown in fig. 13B, and this step is iterated until no new class merging event occurs. As shown in fig. 14, the specific steps of subclass merging include:
step 1405, count the total number N of subclasses over all beams, create an array ClassNum[] that will finally store the class number of every subclass, and set the current class number C to 1;
step 1410, set the currently pending subclass number k to 1 and set the class number of subclass 1 to C (i.e. ClassNum[k] = C with k = 1);
step 1415, k = k + 1, m = 0;
step 1420, m = m + 1;
step 1425, determine whether subclass k and subclass m intersect (or one contains the other); if so, execute step 1435, otherwise execute step 1430;
step 1430, determine whether m is smaller than k; if so, execute step 1420, otherwise execute step 1440;
step 1435, set the class number of subclass k equal to the class number of subclass m (i.e. ClassNum[k] = ClassNum[m]);
step 1440, C = C + 1, ClassNum[k] = C;
step 1445, if k is greater than or equal to N, the merge is complete and step 1450 is performed; otherwise merging continues and step 1415 is performed.
step 1450, the points of all subclasses having the same class number (ClassNum) are output as one class.
For any category k (1 ≤ k ≤ C), all subclasses are traversed, and if a subclass's class number is k, all points of that subclass are output to the set List[k].
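A sketch of the merge loop of steps 1405-1450. For brevity the inclusion/intersection test between two envelope triangles is approximated here by an axis-aligned bounding-box overlap of the triangles, which is an assumption and not the patent's exact test; a full run would repeat the pass (or use union-find) until no new merges occur:

```python
def merge_subclasses(triangles):
    """Group subclasses whose envelope boxes touch.

    triangles: list of ((x1, y1), (x2, y2), (x3, y3)), one envelope per subclass.
    Returns class_num: the merged class number of every subclass.
    """
    def bbox(tri):
        xs, ys = [p[0] for p in tri], [p[1] for p in tri]
        return min(xs), min(ys), max(xs), max(ys)

    def touches(a, b):  # AABB overlap as a stand-in for triangle intersection
        ax0, ay0, ax1, ay1 = bbox(a)
        bx0, by0, bx1, by1 = bbox(b)
        return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

    class_num = [0] * len(triangles)
    c = 0
    for k in range(len(triangles)):
        for m in range(k):
            if touches(triangles[k], triangles[m]):
                class_num[k] = class_num[m]   # join the earlier subclass's class
                break
        else:
            c += 1                            # no intersection found: new class
            class_num[k] = c
    return class_num
```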
Merging subclasses is based on the principle that the outer envelope boxes of subclasses of the same object on different beams mutually include or intersect each other, so the sub-targets of different beams can be combined into a final large target without any given threshold, avoiding the mis-segmentation of common algorithms that rely on a threshold judgment. In the prior art, merging with a threshold has the defects that the same target is divided into two classes when the threshold is small, while different targets are merged into the same class when the threshold is large.
Fig. 1 shows a target matching method provided in the present application, which specifically includes:
step 105, predict the displacement of the target according to the current time and the preceding speed and preceding time of the target in the target library;
specifically, the target displacement may be predicted according to the following formula:
Dx = Vx · (Tnow - T),  Dy = Vy · (Tnow - T)
where Dx, Dy are the displacement components on the x and y axes, Vx, Vy are the velocity components on the x and y axes (which can be obtained from the target library), T is the time when the target last appeared, and Tnow is the current time.
Step 110, match according to the current features of the target, the preceding features of the target in the target library, and the predicted target displacement.
Specifically, the target matching may be performed according to the following formula:
[Formula image: the six per-dimension similarities S1 to S6.]
S=S1·S2·S3·S4·S5·S6
In the formula, one term of each similarity is the current feature of the target, and the other is the predicted feature, obtained by adding the predicted target displacement to the corresponding preceding feature of the target in the target library. For the target, its scan angle θ can be determined from its center of gravity (x̄, ȳ) (the formula is given as an image), and the quadrant in which the target lies is determined according to the following rules:
the first quadrant is 0-45 deg. theta, 315-360 deg
The second quadrant has an angle theta of more than or equal to 45 degrees and less than 135 degrees
The third quadrant is that theta is more than or equal to 135 degrees and less than 225 degrees
In the fourth quadrant, theta is larger than or equal to 225 degrees and smaller than 315 degrees.
The target can be divided into upper, lower, left and right areas by two perpendicular lines passing through its center of gravity, and feature values are calculated for each area. These four areas correspond to the quadrant angles: the horizontal x and y axes are rotated by 45 degrees, and when the coordinate system of the target and the coordinate system of the vehicle are placed on one horizontal line, the quadrant arrangements of the two coordinate systems are centrally symmetric, so the 4 current features of the upper, lower, left and right areas correspond to the fourth, second, first and third quadrants respectively.
After the quadrant is determined, matching is performed using the features of the area corresponding to that quadrant as the features of the target. For example, if the target is determined to be in the 4th quadrant, the similarity is calculated between the features of the upper area of the target in the target library and the features of the currently scanned target point cloud, and the matching target in the target library is determined.
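A sketch of the quadrant lookup and of picking the region feature used for matching. The scan-angle formula is an image in the original; the bearing below assumes y points toward the vehicle's head and clockwise angles are positive, which matches the 90°-right / 180°-rear / 270°-left description but is still an assumption:

```python
import math

def scan_angle(cx, cy):
    """Bearing of the target centroid: 0 deg straight ahead, clockwise positive."""
    return math.degrees(math.atan2(cx, cy)) % 360.0

def quadrant(theta):
    if theta < 45 or theta >= 315:
        return 1
    if theta < 135:
        return 2
    if theta < 225:
        return 3
    return 4

# Region features (upper / lower / left / right) chosen per quadrant,
# following the fourth/second/first/third correspondence described above.
REGION_FOR_QUADRANT = {4: 'upper', 2: 'lower', 1: 'left', 3: 'right'}
```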
Fig. 2 shows a matching flowchart provided in the present application, where m is the number of targets obtained by classification in a current image frame, and specifically includes:
step 205, initializing i to 0;
step 210, i = i + 1;
step 215, determining whether i is greater than m, if yes, ending, otherwise executing step 220;
step 220, X is 0, Smax is 0, and Nmax is 0, where X denotes the number of the target in the target library, Smax denotes the optimal feature matching value, and Nmax denotes the number of the target in the target library corresponding to the optimal feature;
step 225, X = X + 1;
step 230, determine whether X is greater than N0, where N0 is the total number of targets in the target library; if so, execute step 210, otherwise execute step 235;
step 235, for target Ci, calculate, based on the target's quadrant value Q and the matching weights W, the similarities S1, S2, S3, S4, S5, S6 between the target and a target in the history database in the six dimensions: the mean, minimum and maximum of x and the mean, minimum and maximum of y;
Step 240, use the 6-dimensional similarities S1, S2, S3, S4, S5, S6 to calculate the matching feature S:
S = S1·S2·S3·S4·S5·S6
Step 245, traverse all targets in the target library to obtain the maximum value Smax of the similarity coefficient between target Ci and the library targets (Tar1, Tar2, ...), together with the corresponding library target number Nmax.
If Smax > 1, the class number of target Ci is Nmax; otherwise the class number of Ci is 0. A class number of 0 indicates that the current target failed to find a match in the target library; in subsequent steps, targets with class number 0 will be added to the target library as new targets.
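The per-dimension similarity formula is only an image in the original; the sketch below therefore assumes a simple closeness measure (exp of the negative weighted absolute difference) purely to illustrate how the six similarities are multiplied into S and compared across the library:

```python
import math

def match_target(cur_feat, library, weights):
    """Return (Smax, Nmax) for one current-frame target.

    cur_feat and each library entry: dict with the six region-feature keys below.
    weights: the six matching weights W.  The per-dimension similarity used here
    is an illustrative assumption, not the patent's formula.
    """
    keys = ('x_mean', 'x_min', 'x_max', 'y_mean', 'y_min', 'y_max')
    s_max, n_max = 0.0, 0
    for num, lib_feat in library.items():        # num: target number in the library
        s = 1.0
        for key, w in zip(keys, weights):
            s *= math.exp(-w * abs(cur_feat[key] - lib_feat[key]))
        if s > s_max:
            s_max, n_max = s, num
    return s_max, n_max
```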
Optionally, the target in the image frame may be merged, and the target in the target library may be split. Fig. 3 shows a schematic diagram of target merging and splitting provided by the present application, which specifically includes:
step 305, targets Ci and Cj exist in the current frame;
step 310, determine whether the class number Ni of target Ci and the class number Nj of target Cj are the same; if not, end, otherwise execute step 315;
step 315, calculate the feature similarity S according to the quadrant values Q and the matching weights W of the two targets.
Step 320, determining whether S is greater than 1, if so, executing step 325, otherwise, executing step 330;
step 325, merge all points of Ci and Cj together;
step 330, splitting: N = N + 1, create a new target TarN.
Calculate the characteristic direction L of Ci and Cj:
[Formula image: characteristic direction L computed from the centers of gravity of Ci and Cj.]
In the above formula, the two points are the center of gravity of Ci and the center of gravity of Cj, respectively.
Step 335, determine if L is greater than 1, if yes, go to step 345, otherwise go to step 340.
Step 340, splitting up and down: cut the points of the TarX point set that satisfy the first condition (given as a formula image) into TarN, and cut the points that satisfy the second condition (given as a formula image) into TarX. After the processing is completed, update the feature values of TarN and TarX. If the further condition (given as a formula image) holds, then the class number of Ci is X and the class number of Cj is N; otherwise the class number of Ci is N and the class number of Cj is X.
Step 345, splitting left and right: cut the points of the TarX point set that satisfy the first condition (given as a formula image) into TarN, and cut the points that satisfy the second condition (given as a formula image) into TarX. After the processing is completed, update the feature values of TarN and TarX. If the further condition (given as a formula image) holds, then the class number of Ci is Ni and the class number of Cj is N; otherwise the class number of Ci is N and the class number of Cj is Ni.
Step 350, update the class numbers of Ci and Cj.
When the threshold is small, conventional point cloud segmentation may wrongly divide the same target into two classes, and the middle area of the object is then wrongly regarded as a passable area, so that the autonomous vehicle drives straight at the obstacle and an accident results. The present application avoids this defect by merging classes and can prevent such problems. In addition, when the threshold is large, conventional point cloud segmentation may wrongly merge two targets into the same class, so that a passable area in front of the vehicle is treated as the middle of one large target and is not driven through; the autonomous vehicle then has to brake and decelerate, which reduces riding comfort, and repeated deceleration or stopping during high-speed movement is unsafe. The present application avoids this defect by splitting classes, so that the vehicle can move forward stably at a given speed.
Optionally, the present application further updates the speed of targets in the target library: based on an ICP matching algorithm, the target in the database is matched with the corresponding target matched in the current frame, the displacements in the X and Y directions are calculated, and these displacements are divided by the time difference (the time of the current frame minus the time at which the target last appeared) to obtain the speed. The specific steps include:
(1) initialize, i = 0;
(2) i = i + 1. If i > m, the process ends. If the class number of Ci is 0, repeat this step. Otherwise, go to step (3);
(3) use the Iterative Closest Point (ICP) matching method on Ci and TarNi (where Ni is the class number with which Ci was successfully matched in the target library, and TarNi denotes the target in the library successfully matched with Ci) to obtain the displacement <Δxi, Δyi> between the point sets;
(4) calculate the speed of TarNi:
vx = Δxi / (Tnow - T),  vy = Δyi / (Tnow - T)
where Tnow is the current time and T is the time at which the target TarNi last appeared, which can be retrieved from the target library.
The present application uses the ICP matching method to calculate the displacement relation between the point sets of an object in two frames, so the speed is calculated more accurately. ICP matches whole point sets to obtain the displacement, and its precision is higher than that of common feature-point displacement calculation methods. Because the speed can be calculated accurately, decision errors caused by inaccurate speed are avoided, automatic car following can be realized, and the riding experience is improved.
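A minimal stand-in for the translation-only ICP used in the speed update, not the patent's exact implementation: the stored point set is shifted onto the current one by iterated nearest-neighbour mean offsets, and the resulting <Δx, Δy> is divided by the time difference:

```python
def icp_translation(src, dst, iters=20):
    """Estimate the 2-D translation that best moves point set src onto dst."""
    tx = ty = 0.0
    for _ in range(iters):
        dx_sum = dy_sum = 0.0
        for sx, sy in src:
            sx, sy = sx + tx, sy + ty
            # brute-force nearest neighbour in dst
            nx, ny = min(dst, key=lambda q: (q[0] - sx) ** 2 + (q[1] - sy) ** 2)
            dx_sum += nx - sx
            dy_sum += ny - sy
        tx += dx_sum / len(src)
        ty += dy_sum / len(src)
    return tx, ty

def target_speed(src, dst, t_prev, t_now):
    dx, dy = icp_translation(src, dst)
    dt = t_now - t_prev
    return dx / dt, dy / dt            # (vx, vy) of the tracked target
```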
Optionally, the present application further updates the target database and filters the target speed, and the specific flow is as shown in fig. 4:
step 405, process any object Ci in the current image frame;
step 410, initialize the update operation and determine whether the class number of target Ci is 0; if it is, execute step 420, otherwise execute step 415;
step 415, update the target in the target library: for object Ci in the current frame, if the class number of Ci is not 0 (in this case the class number is Ni), replace the point cloud and feature values of TarNi with those of Ci, add the velocity <vx,i, vy,i> to TarNi, and set the time of TarNi to the time of the current frame;
step 420, create a new target in the target library: if the class number of Ci is 0, then N = N + 1 and the target TarN is created;
step 425, assign values to the new target created in step 420: assign the point cloud and feature values of Ci to TarN, set the speed of TarN to <0, 0>, and set the appearance time of TarN to the time of the current frame;
step 430, determining whether all classes are processed, if so, executing step 435, otherwise, processing other classes of the current frame;
step 435, determining any one target j in the target library;
step 440, obtain the time Tj of the j-th target Tarj in the target library and calculate the time difference from the current frame: ΔT = Tnow - Tj;
Step 445, determining whether Δ T is greater than 1.5s, if yes, executing step 450, otherwise, executing step 455;
step 450, consider that the target no longer appears and delete Tarj;
step 455, confirming that all targets are processed, if yes, executing step 460, otherwise, processing other targets in the target library;
the velocities of the individual objects in the object library are filtered using Savitzky-Golay, step 460.
In this application the Savitzky-Golay filtering method is used to filter the speed of the target; the resulting speed has no obvious delay, the ability to continuously track targets with large acceleration changes is improved, the precision of anticipating the target's position is improved, safety is improved, no additional radar sensor needs to be installed on the vehicle, and the system configuration cost is reduced.
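A sketch of the speed smoothing using SciPy's Savitzky-Golay filter over one target's recent velocity history (the window length and polynomial order are illustrative choices, not values from the patent):

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_velocity(vx_history, vy_history, window=7, order=2):
    """Return the filtered latest (vx, vy); fall back to raw values while the
    history is still shorter than the filter window."""
    if len(vx_history) < window:
        return vx_history[-1], vy_history[-1]
    vx_f = savgol_filter(np.asarray(vx_history, dtype=float), window, order)
    vy_f = savgol_filter(np.asarray(vy_history, dtype=float), window, order)
    return vx_f[-1], vy_f[-1]
```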
The point cloud segmentation and combination can be realized through the process, the calculation amount of point cloud processing is reduced, and the real-time performance of data processing is improved.
Fig. 15 shows a target matching and velocity measurement flowchart provided in the present application, which specifically includes:
step 1505, receiving laser radar data;
step 1510, point cloud clustering and segmentation;
step 1515, extract geometric features;
step 1520, acquire the motion state of the ego vehicle, specifically including speed and direction;
step 1525, predicting target displacement according to the target speed, the last time and the current time in the target library;
step 1530, matching according to the extracted geometric features, the geometric features of the target in the target library and the predicted target displacement;
step 1535, merging or splitting the targets;
step 1540, update the target repository;
step 1545, calculate the target displacement and speed, for example using the ICP matching algorithm;
in step 1550, Savitzky-Golay filtering is performed on the target velocity, and the target velocity may be updated into a target library.
Targets are incorporated into a database, and the targets of a new frame are compared and identified against the targets in the database, which effectively handles the situation where a target is lost in some frames. Under extreme conditions the sensor may fail to scan a target; the target-database approach ensures that, even if the target temporarily does not appear in the current frame, the autonomous vehicle can still anticipate that there is an obstacle (target) at the corresponding position, thereby avoiding accidents. When a lost target reappears, the method and device can identify it as the same target as before, so the target's earlier motion attribute information is not lost, the system's ability to anticipate changes in the target's state is enhanced, and the running stability of the system is improved. For scanning discontinuities caused by occlusion of a target, re-clustering is performed through matching, preventing one target from being wrongly divided into two classes.
In step 1515, the geometric features may be extracted through the following process:
Fig. 16 is a schematic diagram showing the scanning results at each stage as a rectangular target vehicle overtakes from behind to in front. In fig. 16, the light boxes indicate vehicles and the black dots indicate laser scanning points. At stage 11 (target directly behind), the lidar can only scan the target's head; at stage 12 (target behind on the left), it can scan the target's head and right side; at stage 13 (target to the left front), it can only scan the target's right side; at stage 14 (target to the left front), it can scan the target's tail and right side; and at stage 15 (target directly ahead), it can scan the target's tail.
As can be seen from fig. 16, the scanning results of a target at different orientations differ greatly. Therefore, tracking or velocity measurement using the target's center point or its length/width introduces large errors. Instead, the geometric features of the target are characterized by the maximum, minimum and mean values along the horizontal and vertical axes and by the target's center of gravity.
In the first step, the maximum and minimum values of the point cloud of each category along the x and y axes are calculated: xmax, xmin, ymax, ymin.
In the second step, the center of gravity of each target is calculated from the point cloud of each category:
x̄ = (1/n)·Σ xi,  ȳ = (1/n)·Σ yi
where n is the number of points in the class and <xi, yi> are the coordinates of the i-th point in the class. The center of gravity is calculated in this way for every segmented class.
In the third step, the target is divided into four parts (upper, lower, left and right) and the feature values of each part are calculated: the mean, minimum and maximum of x and of y (x̄, xmin, xmax, ȳ, ymin, ymax). The four parts are calculated as follows:
region 1 (left part) is smaller than x by the value of xmin+0.5 calculation, i.e. for any point i of the class, if xi<xmin+0.5, the average, maximum and minimum of x and y are counted
Figure BDA0001570712950000174
xmin,1,xmax,1,ymin,1,ymax,1
Region 2 (right part) has x greater than xmax0.5 calculation, i.e. for any point i of the class, if xi>xmax0.5, then calculate the mean, maximum and minimum of x and y
Figure BDA0001570712950000175
xmin,2,xmax,2,ymin,2,ymax,2
Region 3 (upper part) has x smaller than ymin+0.5 calculation, i.e. for any point i of the class, if yi<ymin+0.5, the average, maximum and minimum of x and y are counted
Figure BDA0001570712950000176
xmin,3,xmax,3,ymin,3,ymax,3
Region 4 (lower portion) is smaller than y in x valuemax0.5 calculation, i.e. for any point of the class, if yi>ymax0.5, then calculate the mean, maximum and minimum of x and y
Figure BDA0001570712950000177
xmin,4,xmax,4,ymin,4,ymax,4
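A sketch of this feature extraction: whole-class bounds, center of gravity, and mean/min/max statistics of the four 0.5 m edge regions (field names are illustrative):

```python
def region_stats(pts):
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return {'x_mean': sum(xs) / len(xs), 'y_mean': sum(ys) / len(ys),
            'x_min': min(xs), 'x_max': max(xs),
            'y_min': min(ys), 'y_max': max(ys)}

def extract_features(points):
    """points: list of (x, y) for one segmented class."""
    whole = region_stats(points)
    x_min, x_max = whole['x_min'], whole['x_max']
    y_min, y_max = whole['y_min'], whole['y_max']
    regions = {
        'left':  [p for p in points if p[0] < x_min + 0.5],
        'right': [p for p in points if p[0] > x_max - 0.5],
        'upper': [p for p in points if p[1] < y_min + 0.5],
        'lower': [p for p in points if p[1] > y_max - 0.5],
    }
    return {'whole': whole,
            'centroid': (whole['x_mean'], whole['y_mean']),
            'regions': {name: region_stats(pts) for name, pts in regions.items() if pts}}
```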
In addition, a scanning direction θ may be defined in which the vehicle's head direction is the 0° direction and the clockwise direction is positive (i.e. 90° on the right, 180° behind, 270° on the left, and 0°/360° ahead). The scanning direction of each target is calculated from the coordinates of its center of gravity:
[Formula image: scanning direction θ computed from the center-of-gravity coordinates.]
as shown in fig. 17, the present application provides a lidar point cloud classification apparatus, which includes a processor 1705 and a storage device 1710. The storage device stores a program, and when the program processor is executed, the laser radar point cloud classification method provided by the application can be realized.
Accordingly, the present application provides an object matching apparatus, whose structure can refer to fig. 17; when the program on the storage device is executed by the processor, the object matching method provided by the present application can be implemented. Optionally, the target matching device may include a prediction module for predicting the target displacement according to the current time and the preceding speed and preceding time of the target in the target library, and a matching module for matching according to the current features of the target, the preceding features of the target in the target library, and the predicted target displacement. Optionally, the apparatus further comprises a classification module for classifying the point cloud according to the distance between adjacent points and a distance threshold between adjacent points. Optionally, the apparatus further comprises a merging module for determining the outer envelope boxes of the classified points on the two-dimensional projection plane and merging the classes whose outer envelope boxes intersect. The matching module is mainly used to calculate the similarity between a target in the current frame and the targets in the target library so as to determine the target in the target library corresponding to the target in the current frame; the choice of geometric features improves the matching precision. In addition, the matching module can also merge and split targets, and can calculate the target speed and apply Savitzky-Golay filtering to the speed.
Accordingly, the present application provides a vehicle, which may include the laser radar point cloud target segmentation device and/or the target matching device provided by the present application, for implementing assisted driving of the vehicle. For example, the control system of a vehicle uses the laser radar target segmentation device provided by the application to segment the targets around the vehicle, and matches newly appearing targets with the targets in the existing database, thereby realizing tracking and speed measurement of the targets, for applications such as automatic following.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing specification illustrates and describes several specific embodiments of the application, but as stated above it is to be understood that the application is not limited to the forms disclosed herein; it is not to be taken as excluding other embodiments, and the application may be used in various other combinations, modifications and environments and may be modified, within the scope of the inventive concept described herein, through the above teachings or the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the application are intended to be within the protection scope of the appended claims.

Claims (10)

1. A multiline laser radar target segmentation method is characterized by comprising the following steps:
classifying point clouds obtained by scanning targets by the multi-line laser radar based on the same-target echo point adjacent principle to obtain subclasses of the point clouds of the same target;
projecting the point cloud of the subclass onto a two-dimensional plane;
acquiring an outer envelope box of each subclass;
merging the sub-classes intersected by the outer envelope boxes into one class.
2. The method of claim 1, wherein the obtaining the outer envelope box of each subclass comprises:
determining a first point and a last point belonging to each subclass on the two-dimensional plane according to the anticlockwise direction, and connecting the first point and the last point into a line segment;
determining a point of the remaining points of each sub-class that is farthest from the line segment;
and taking a triangle formed by the point farthest from the line segment and the first point and the last point as the outer envelope box of each subclass.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
calculating the distance between two adjacent points in the point cloud;
and under the condition that the distance meets a preset condition, dividing the two adjacent points into the same subclass.
4. The method according to claim 3, wherein the preset condition is that the distance between the two adjacent points is less than or equal to 0.1 + D·T meters, where D is the distance from the earlier-scanned of the two adjacent points to the lidar, and T is a preset coefficient.
5. A method of object matching, comprising:
predicting the displacement of a target according to the current time and the preceding speed and preceding time of the target in the target library;
matching according to the current features of the target, the preceding features of the target in the target library, and the predicted target displacement;
wherein the current features of the target and/or the preceding features of the targets in the target library are geometric features of a class obtained according to the method of any one of claims 1-4.
6. The method of claim 5, wherein matching according to the current features of the target, the preceding features of the target in the target library and the predicted target displacement comprises:
calculating a first matching feature of the target according to the current feature of the target;
calculating a second matching feature of the target according to the preceding features of the target in the target library and the predicted target displacement;
determining a matched target in a target library according to the first matching feature and the second matching feature; or
The method further comprises the following steps:
and merging or splitting the targets in the target library according to the first matching features of the targets and the second matching features of the targets.
7. A multiline laser radar point cloud target segmentation device is characterized by comprising:
a storage device for storing a program;
a processor for executing the program to implement the method of any one of claims 1 to 4.
8. An object matching apparatus, comprising:
a storage device for storing a program;
a processor for executing the program to implement the method of any one of claims 5 to 6.
9. A storage device having a program stored thereon, wherein the program is adapted to perform the method of any of claims 1-4 and/or the method of any of claims 5-6 when executed by a processor.
10. A vehicle, characterized in that it comprises a device according to claim 7 and/or a device according to claim 8.
CN201810116188.4A 2018-02-06 2018-02-06 Laser radar point cloud target segmentation method, target matching method, device and vehicle Active CN110119751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810116188.4A CN110119751B (en) 2018-02-06 2018-02-06 Laser radar point cloud target segmentation method, target matching method, device and vehicle


Publications (2)

Publication Number Publication Date
CN110119751A CN110119751A (en) 2019-08-13
CN110119751B (en) 2021-07-20

Family

ID=67519935


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766719B (en) * 2019-09-21 2022-11-18 北醒(北京)光子科技有限公司 Target tracking method, device and storage medium
CN113689471B (en) * 2021-09-09 2023-08-18 中国联合网络通信集团有限公司 Target tracking method, device, computer equipment and storage medium
CN113850995B (en) * 2021-09-14 2022-12-27 华设设计集团股份有限公司 Event detection method, device and system based on tunnel radar vision data fusion

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463871A (en) * 2014-12-10 2015-03-25 武汉大学 Streetscape facet extraction and optimization method based on vehicle-mounted LiDAR point cloud data
CN105957076A (en) * 2016-04-27 2016-09-21 武汉大学 Clustering based point cloud segmentation method and system
CN106778749A (en) * 2017-01-11 2017-05-31 哈尔滨工业大学 Based on the touring operating area boundary extraction method that concentration class and Delaunay triangles are reconstructed
CN107025323A (en) * 2016-12-29 2017-08-08 南京南瑞信息通信科技有限公司 A kind of transformer station's fast modeling method based on ATL

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9311756B2 (en) * 2013-02-01 2016-04-12 Apple Inc. Image group processing and visualization




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant