CN110119751A - Laser radar point cloud Target Segmentation method, target matching method, device and vehicle - Google Patents
- Publication number
- CN110119751A (application CN201810116188.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- point
- subclass
- point cloud
- laser radar
- Prior art date
- Legal status: Granted (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/50—Systems of measurement based on relative movement of target
- G01S17/58—Velocity or trajectory determination systems; Sense-of-movement determination systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Abstract
This application discloses a lidar point cloud target segmentation method, a target matching method, a device, and a vehicle. The target segmentation method includes: classifying the point cloud acquired by a multi-beam lidar scanning a target to obtain subclasses of points belonging to the same target; projecting the points of each subclass onto a two-dimensional plane; obtaining the envelope box of each subclass; and merging subclasses whose envelope boxes intersect into one class. Because the subclasses produced by segmentation are merged through their envelope boxes, no distance threshold is needed during merging, avoiding the prior-art difficulty of choosing one. By changing the point cloud classification method, the computational load is reduced. Targets are recorded in a database, and the targets of each new frame are identified by comparison against that database, so the case of a target disappearing from view for a few frames is handled correctly and accidents are avoided.
Description
Technical field
This application relates to lidar point clouds, and in particular to a lidar point cloud target segmentation method, a target matching method, a device, and a vehicle.
Background technique
In the prior art, matching a target across different image frames is essential to tracking it. Existing technical solutions mainly work as follows: extract the geometric features of each target, compute the geometric similarity between the targets of two consecutive frames, combine that similarity with the position predicted for the current frame from the target's earlier velocity, and thereby match targets between the two frames. The range difference of the same object's corresponding points (inflection points, center points, etc.) in the two frames is then divided by the time between the frames to obtain a velocity. Because the target's shape jitters over time — a position scanned in one frame is not necessarily scanned in the next — this velocity estimate carries error, so Kalman or particle filtering is applied and the filtered velocity is used to predict the object's position in the next round. Tracking assumes the target is continuously visible, yet in real data a target is often missed in some frames; when it reappears, the algorithm treats it as a new target. Using target features (the point set's center point, boundary points, inflection points, etc.) as corresponding points between two frames is also unreliable: in real data the laser rarely hits the same position in both scans, so the corresponding points are inaccurate and the resolved velocity precision is very low. Kalman filtering additionally introduces a large delay and cannot respond quickly to high-speed targets.
To distinguish targets, the point cloud must be segmented. In the prior art the usual segmentation method for laser point clouds is region growing: for any point, compute its distance to every other point; points closer than a threshold are assigned to the same class, and this is iterated until no point can be added to an existing class. To reduce the computation, the non-ground point cloud is commonly voxelized with an octree and clustered by region growing over the octree voxel grid. For N scanned points, plain region growing costs O(N²) and octree-based region growing costs O(N log N); since one lidar frame often contains 20,000-200,000 points, segmenting a lidar point cloud is computationally expensive. Building the octree also takes time and occupies more than twice the storage of the point cloud itself. Moreover, because target shapes are irregular, region growing misclassifies many targets. For a target shaped like the character "凸", for example, the echo points of two lidar beams can be far apart, and a clustering threshold set too small wrongly splits the one target into two; enlarging the threshold instead wrongly merges two nearby targets into one class. A threshold is therefore an extremely unreliable way to decide whether points belong to the same class.
Summary of the invention
In view of this, this application provides a lidar point cloud target segmentation method, a target matching method, a device, and a vehicle, to achieve point cloud classification and matching of targets against an object library.
This application provides a multi-beam lidar point cloud subclass merging method, comprising:
classifying the point cloud acquired by a multi-beam lidar scanning a target, obtaining subclasses of same-target points;
projecting the points of each subclass onto a two-dimensional plane;
obtaining the envelope box of each subclass;
merging subclasses whose envelope boxes intersect into one class.
This application provides a target matching method, comprising:
predicting the target displacement from the current time and the target's earlier velocity and earlier time in the object library;
matching the target's current features against the earlier features of targets in the object library combined with the predicted displacement;
wherein the earlier features of a target are the geometric features of a class obtained, according to the above method, from the target's current features and/or the object library.
This application provides a multi-beam lidar point cloud classification device, comprising:
a storage device for storing a program;
a processor for executing the program to implement the method.
This application provides a target matching device, comprising:
a storage device for storing a program;
a processor for executing the program to implement the method.
This application provides a storage device on which a program is stored, the program implementing the method when executed by a processor.
This application provides a vehicle including the device.
By merging the post-segmentation subclasses through envelope boxes, no threshold is needed during merging, avoiding the prior-art difficulty of choosing one. Changing the point cloud classification method reduces the computational load. Targets are stored in a database, and the targets of each new frame are identified by comparison against it, so a target that drops out of view for a few frames is handled correctly and accidents are avoided: when a lost target reappears, it is recognized as the previously identified target, its earlier motion attributes are not lost, the system's ability to predict changes of target state is enhanced, and system operation is more stable.
Detailed description of the invention
The drawings described herein provide a further understanding of the application and constitute part of it; the illustrative embodiments and their descriptions serve to explain the application and do not unduly limit it. In the drawings:
Fig. 1 is the target matching method provided by the present application;
Fig. 2 is the matching flow chart provided by the present application;
Fig. 3 is a schematic diagram of target merging and splitting provided by the present application;
Fig. 4 is a schematic diagram of target database updating and target velocity filtering provided by the present application;
Fig. 5 is a schematic diagram of a multi-beam lidar provided by the present application;
Fig. 6 is the point cloud classification and merging method provided by the present application;
Fig. 7-Fig. 9 are schematic diagrams of lidar beam scanning provided by the present application;
Fig. 10 is a schematic diagram of the horizontal and vertical angles of an echo point provided by the present application;
Fig. 11 is a schematic diagram of beam segmentation provided by the present application;
Fig. 12 is a schematic diagram of envelope box calculation provided by the present application;
Fig. 13A is a schematic diagram of distance calculation provided by the present application;
Fig. 13B is a schematic diagram of envelope box intersection provided by the present application;
Fig. 14 is a schematic diagram of subclass merging provided by the present application;
Fig. 15 is the flow chart of target matching and velocity measurement provided by the present application;
Fig. 16 is a schematic diagram of a target vehicle overtaking from behind provided by the present application;
Fig. 17 is a schematic diagram of the lidar point cloud classification device provided by the present application.
Specific embodiment
Certain terms are used throughout the specification and claims to refer to particular components; those skilled in the art will appreciate that hardware manufacturers may call the same component by different names. This specification and the claims distinguish components by functional difference, not by difference of name. The word "comprising" used throughout the specification and claims is open-ended and should be construed as "including but not limited to". "Substantially" means within an acceptable error range in which a person skilled in the art can solve the stated technical problem and basically achieve the stated technical effect. The following description presents preferred embodiments for the purpose of illustrating the general principles of the application and is not intended to limit its scope; the protection scope of the application is defined by the appended claims.
To reduce the complexity of point cloud segmentation, this application changes the segmentation method according to the characteristics of the point cloud itself. First, one frame of input lidar scan data is resolved into coordinates in the local coordinate system; then ground points are rejected based on the mounting height of the device, and the point cloud of each beam is stored in its own buffer. The point cloud of each beam is then segmented separately into subclasses; finally, the envelope box of each subclass is computed, subclasses are merged according to whether their envelope boxes intersect, and the points of each resulting class are output.
A multi-beam lidar is shown in Fig. 5, which simulates the spatial relationship of 3 laser beams scanning simultaneously. In this application, lidars with 2 beams, 3 beams, or more are all referred to as multi-beam lidars. The lidar rotates about its axis while each detector ranges at its own vertical angle; the scans of the individual beams are mutually independent and unassociated. The lidar rotates counterclockwise to complete a stereoscopic scan of its surroundings; OO' is the lidar's longitudinal axis, and its three laser transceiver units (green, blue, orange) each trace a circle per rotation.
For the point cloud scanned by each beam of the multi-beam lidar, the targets of an image frame can be determined by the classification and merging method provided by this application, shown in Fig. 6, which specifically includes:
Step 605, input the scan data;
The following illustrates the echo characteristics of any one lidar beam. Based on the detection principle of lidar rotary scanning, consider the four three-dimensional vehicle targets and the roadside guardrails shown in Fig. 7, with the lidar scanning from the middle.
Fig. 8 shows the lidar and the surrounding objects projected onto a two-dimensional plane. It can be seen that, because of line of sight, only the two sides of each vehicle facing the lidar can be scanned, and owing to occlusion by the vehicles, the road guardrails are scanned as multiple separated segments. For the targets of Fig. 8, the echo point signals obtained by the lidar scan are shown in Fig. 9: the lidar rotates counterclockwise and successively scans vehicle 1, the right railing, vehicle 2, the right railing, the left railing, vehicle 3, the left railing, and vehicle 4 — 8 targets in total. The point clouds of these 8 targets serve as the scan data input to be segmented and merged.
Step 610, compute the echo point coordinates.
From the values recorded during the lidar scan — the horizontal angle α of the echo point, the vertical angle β of the echo point, and the range D — the echo point's coordinates in the local coordinate system can be computed as shown in Figure 10:
x = D·cos β·cos α, y = D·cos β·sin α, z = D·sin β.
Step 615, reject ground points. Create arrays R1, R2, ..., RX of structures (x/y/z/D), where X is the lidar's total number of beams. Starting from the first point, if z is greater than the threshold −H (the lidar center has z = 0 in the local coordinate system and H is the lidar mounting height, so the ground lies at height −H), store the current point's values (x/y/z/D) in array Ri, where i is the number of the beam the current point belongs to.
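As a concrete illustration of steps 610 and 615, the sketch below converts one echo (α, β, D) to local coordinates with the formula above and filters ground points by mounting height. The function names and the tuple layout are illustrative choices, not part of the patent.

```python
import math

def echo_to_local(alpha_deg, beta_deg, dist):
    """Step 610: (horizontal angle, vertical angle, range) -> local (x, y, z)."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    x = dist * math.cos(b) * math.cos(a)
    y = dist * math.cos(b) * math.sin(a)
    z = dist * math.sin(b)
    return x, y, z

def reject_ground(points, mount_height):
    """Step 615: keep (x, y, z, D) tuples whose z lies above the ground at -H."""
    return [p for p in points if p[2] > -mount_height]
```

In a full pipeline the surviving points would then be appended to the per-beam buffer Ri of the beam that produced them.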
Step 620, classify the point cloud of each beam of the multi-beam lidar scanning the target to obtain the subclasses of same-target points; the detailed process is shown in Fig. 11:
Step 1105, initialize the segmentation: set the current initial class number C to 1;
Step 1110, start processing the first beam; the point index k within the current beam is 0;
Step 1115, set the class number of the first point of the current beam to C and the current point index k to 1;
Step 1120, determine whether the distance between the next point and the current point exceeds 0.1 + DT (in meters); if it does, execute step 1125, otherwise execute step 1130;
here D is the range of the current point (in meters) and T is a given threshold, typically set to 0.001 (i.e., the maximum allowed gap between points of the same target is about 0.1 m at close range and 0.2 m at 100 m);
Step 1125, since the distance between the next point and the current point exceeds 0.1 + DT, set C = C + 1;
Step 1130, set the class number of the next point to C;
Step 1135, judge whether all points have been processed; if so, execute step 1140, otherwise execute step 1115;
Step 1140, judge whether all scan lines have been processed; if so, terminate, otherwise execute step 1110.
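The single-pass per-beam segmentation of steps 1105-1140 can be sketched as follows, assuming the points of one beam arrive in scan order as (x, y, z, D) tuples; the function name and data layout are illustrative.

```python
import math

def segment_beam(points, base=0.1, t=0.001):
    """One pass over a beam's ordered echoes (steps 1105-1140).
    Starts a new class whenever the gap to the previous point exceeds
    0.1 + D*T, so the allowed in-target gap grows linearly with range D.
    Returns one class label per point."""
    labels = []
    c = 1
    for i, p in enumerate(points):
        if i > 0:
            prev = points[i - 1]
            gap = math.dist(prev[:3], p[:3])   # 3D distance between consecutive echoes
            if gap > base + p[3] * t:
                c += 1                          # gap too large: open a new class
        labels.append(c)
    return labels
```

Across beams, the class counter C would keep incrementing rather than restarting, so every subclass in the frame gets a distinct number before the envelope-box merge of step 630.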
Taking the point cloud of Fig. 9 as an example, the above segmentation method classifies the points as follows:
Class 1: point number 1-6;
Class 2: point number 7-16;
Class 3: point number 17-22;
Class 4: point number 23-26;
Class 5: point number 27-30;
Class 6: point number 31-36;
Class 7: point number 37-46;
Class 8: point number 47-52.
The points with small numbers above are scanned first by the laser beam. This point cloud segmentation process is a lidar point cloud segmentation method based on the principle that echo points of the same target are adjacent: only the distance between consecutive points needs to be computed, the classification of the whole cloud is completed in a single pass, and together with the subsequent class merging the computation is only O(2N). The direct consequence of this low, simple, and reliable computational load is that the algorithm runs fast, ports easily to low-cost computing platforms, and achieves real-time processing even on low-performance hardware. Existing point cloud segmentation methods cost O(N²) or O(N log N) and need 50 ms-100 ms to segment one frame of data, whereas the method described here costs only O(2N) and completes the entire computation in under 3 ms. For a fast-moving autonomous vehicle, the faster the processing, the more effectively it perceives dynamic changes in its surroundings and the more responsive its decisions. Lidar detects accurately and stably and often serves as the main sensor of autonomous vehicles, but existing autonomous vehicles need high-end CPUs or GPUs to cope with the huge computation of lidar point clouds and, limited by cost, cannot yet be commercialized at scale. Because the present algorithm is simple, it achieves real-time processing even when ported to an inexpensive microcontroller unit (Microcontroller Unit, MCU, e.g., a single-chip microcomputer), and can thereby accelerate the commercialization of autonomous vehicles.
Step 625, compute the envelope box of each subclass. For each subclass, find the point farthest from the line through the subclass's first and last points; the triangle formed by these 3 points serves as the envelope box. To compute the envelope box, the point cloud is projected onto a two-dimensional plane, and the envelope box is determined from the points of the same subclass on that plane. The detailed process is shown in Fig. 12 and specifically includes:
Step 1205, compute the line L through P1 and Pn; set the current point index k = 1, the initial point index m = 1, and the maximum distance D0 = 0, where P1 to Pn are the points in the two-dimensional projection plane (using only the x and y coordinates of each point), as shown in Fig. 13A.
Step 1210, advance the current point index k; for any point Pk in class Cij (i-th beam, j-th class; Pk ∈ Cij, 1 < k < n), compute its distance Dk to the line L;
Step 1215, judge whether D0 is less than Dk; if so, execute step 1220, otherwise execute step 1225;
Step 1220, since D0 < Dk, set D0 = Dk and record the current point index (m = k);
Step 1225, judge whether all points have been processed; if so, execute step 1230, otherwise execute step 1210.
Step 1230, the point of Cij farthest from the line has been found; as shown in Fig. 13A, the farthest point Pm (1 < m < n) is the inflection point of the outer contour, and the triangle formed by the 3 points P1, Pm, Pn serves as the envelope box.
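The envelope-triangle computation of steps 1205-1230 can be sketched as follows for one subclass projected to 2D. If all points are collinear the triangle degenerates to a segment, a case the patent does not discuss; the function name is illustrative.

```python
import math

def envelope_triangle(pts):
    """Steps 1205-1230: triangle envelope of one subclass in the 2D plane,
    formed by the scan endpoints P1, Pn and the point farthest from line P1-Pn."""
    (x1, y1), (xn, yn) = pts[0], pts[-1]
    dx, dy = xn - x1, yn - y1
    norm = math.hypot(dx, dy) or 1.0
    best_i, best_d = 0, 0.0
    for i, (x, y) in enumerate(pts[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm   # point-to-line distance
        if d > best_d:
            best_d, best_i = d, i
    return pts[0], pts[best_i], pts[-1]
```

Because only one pass over the subclass is needed, this step preserves the overall O(N)-style cost claimed for the pipeline.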
Step 630, merge subclasses according to their envelope boxes. For any two subclasses, compute from their envelope boxes whether a containment or intersection relationship exists; if so, assign the two subclasses to the same class, as shown in Fig. 13B. This step is iterated until no new class-merging event occurs. The specific merging steps are shown in Fig. 14 and include:
Step 1405, count the number N of subclasses over all beams; create the array ClassNum[] that will hold the final class number of each subclass; set the current class number C = 1;
Step 1410, set the currently processed subclass k to 1 and the class number of subclass 1 to C (i.e., ClassNum[k] = C, k = 1);
Step 1415, k = k + 1, m = 0;
Step 1420, m = m + 1;
Step 1425, judge whether subclass k and subclass m have an intersection relationship; if so, execute step 1435, otherwise execute step 1430;
Step 1430, judge whether m is less than k; if so, execute step 1420, otherwise execute step 1440;
Step 1435, give subclass k the same class number as subclass m (i.e., ClassNum[k] = ClassNum[m]);
Step 1440, C = C + 1, ClassNum[k] = C;
Step 1445, if k is less than N, subclasses remain to be merged; execute step 1415. Otherwise merging is complete; execute step 1450.
Step 1450, output the points of all subclasses sharing the same class number (ClassNum) as one subset.
For any class k (1 ≤ k ≤ C), traverse all subclasses; if a subclass's class number is k, output all of its points into the set List[k].
The merging above rests on the principle that the envelope boxes of subclasses composed of the same object on different beams contain or intersect one another: the sub-targets of different beams can be merged into the final large target without any given threshold, avoiding the misclassification that common algorithms incur when relying on threshold decisions. In the prior art, merging by threshold suffers from the defect that a small threshold splits one target into two classes while a large threshold merges different targets into one class.
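The merge of steps 1405-1450 can be sketched with a separating-axis triangle-overlap test plus a union-find structure. The union-find replaces the patent's iterate-until-no-new-merge loop but produces the same grouping (connected components of the intersection graph); all names are illustrative.

```python
def _project(tri, axis):
    """Project a triangle's vertices onto an axis; return (min, max)."""
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in tri]
    return min(dots), max(dots)

def triangles_intersect(t1, t2):
    """2D triangle overlap via the separating-axis theorem: the triangles
    overlap, touch, or contain one another iff no edge normal separates them."""
    for tri in (t1, t2):
        for i in range(3):
            ex = tri[(i + 1) % 3][0] - tri[i][0]
            ey = tri[(i + 1) % 3][1] - tri[i][1]
            axis = (-ey, ex)                       # normal of this edge
            a0, a1 = _project(t1, axis)
            b0, b1 = _project(t2, axis)
            if a1 < b0 or b1 < a0:
                return False                       # separating axis found
    return True

def merge_subclasses(triangles):
    """Assign one final class label per subclass: subclasses whose envelope
    triangles intersect (directly or transitively) share a label."""
    parent = list(range(len(triangles)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path compression
            i = parent[i]
        return i
    for i in range(len(triangles)):
        for j in range(i):
            if triangles_intersect(triangles[i], triangles[j]):
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(triangles))]
```

The pairwise test is O(N²) in the number of subclasses, but N here is the handful of subclasses per frame, not the raw point count, so the overall cost claim is unaffected.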
Fig. 1 shows the target matching method provided by this application, which specifically includes:
Step 105, predict the target displacement from the current time and the target's earlier velocity and earlier time in the object library;
Specifically, the target displacement can be predicted according to the following formula:
Dx = Vx·(Tnow − T), Dy = Vy·(Tnow − T);
where Dx, Dy are the displacement components along the x and y axes, Vx, Vy are the velocity components along the x and y axes (obtainable from the object library), T is the time the target last appeared, and Tnow is the current time.
Step 110, match the target's current features against the earlier features of targets in the object library combined with the predicted displacement.
Specifically, target matching can be carried out according to the formula
S = S1·S2·S3·S4·S5·S6,
comparing the current features of the target with the predicted features obtained by adding the predicted displacement to the earlier features of the corresponding target in the object library. For a target, its scanning angle θ can be determined from its center of gravity, and the quadrant of the target is determined according to the following rule:
First quadrant: 0° ≤ θ < 45° or 315° ≤ θ < 360°;
Second quadrant: 45° ≤ θ < 135°;
Third quadrant: 135° ≤ θ < 225°;
Fourth quadrant: 225° ≤ θ < 315°.
The target itself can be divided into four regions — up, down, left, right — by two perpendicular lines through its center of gravity, and the feature values of each region computed separately. These four regions correspond to the quadrant angles above, i.e., the horizontal x and y axes rotated by 45 degrees. When the target's own coordinate system and the vehicle's own coordinate system are placed on one horizontal line, the quadrant arrangements of the two coordinate systems are centrally symmetric: the up, down, left, and right regions' 4 current features correspond to the fourth, second, first, and third quadrants respectively.
After the quadrant is determined, the features of the corresponding region are used as the target's features for matching. For example, if the target is determined to lie in the fourth quadrant, the features of the upper region of the object-library target are compared for similarity with the features of the currently scanned target point cloud to determine the matched target in the object library.
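The quadrant rule above can be sketched directly; the function name is illustrative, and θ is assumed to be the barycenter scan angle in degrees.

```python
def quadrant(theta_deg):
    """Quadrant of a target from its barycenter scan angle theta (degrees),
    per the rule: Q1 for [0,45) or [315,360), Q2 for [45,135),
    Q3 for [135,225), Q4 for [225,315)."""
    t = theta_deg % 360.0
    if t < 45 or t >= 315:
        return 1
    if t < 135:
        return 2
    if t < 225:
        return 3
    return 4
```

The returned quadrant then selects which of the target's four regional feature sets participates in the similarity comparison.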
Fig. 2 shows the matching flow chart provided by this application, where m is the number of targets obtained by classification in the current image frame. It specifically includes:
Step 205, initialize i = 0;
Step 210, i = i + 1;
Step 215, determine whether i is greater than m; if so, terminate, otherwise execute step 220;
Step 220, X = 0, Smax = 0, Nmax = 0, where X is the index of the target in the object library, Smax is the best feature-matching value, and Nmax is the object-library target number corresponding to the best feature;
Step 225, X = X + 1;
Step 230, determine whether X is greater than N0, the total number of targets in the object library; if so, execute step 210, otherwise execute step 235;
Step 235, for target Ci, according to the target's quadrant Q and matching weight W, compute its similarities S1, S2, S3, S4, S5, S6 with the target-history-database target in the six feature dimensions x_min, x̄, x_max, y_min, ȳ, y_max;
Step 240, from the six dimension similarities S1, S2, S3, S4, S5, S6, compute the matching feature S: S = S1·S2·S3·S4·S5·S6;
Step 245, after traversing all targets in the object library, obtain the maximum likeness coefficient Smax between target Ci and the library targets (Tar1, Tar2, ...), together with the corresponding library target Nmax.
If Smax > 1, the class number of target Ci is Nmax; otherwise the class number of Ci is 0, meaning the current target found no match in the object library; in subsequent steps, targets with class number 0 are added to the object library as new targets.
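The product-of-similarities matching of steps 235-245 can be sketched as follows. The patent does not specify the per-dimension similarity function, so the 1/(1 + w·|a−b|) kernel and the 0.5 acceptance threshold below are placeholder assumptions (the patent's own test is Smax > 1 under its unspecified similarity definition); all names are illustrative.

```python
def similarity(cur, ref, weights):
    """Product S = S1*...*S6 over six feature dimensions (step 240).
    Each factor 1/(1 + w*|a-b|) is 1 for identical values and decays with
    the weighted absolute difference -- an illustrative choice only."""
    s = 1.0
    for a, b, w in zip(cur, ref, weights):
        s *= 1.0 / (1.0 + w * abs(a - b))
    return s

def best_match(cur, library, weights, threshold=0.5):
    """Traverse the object library (step 245); return the 1-based index of the
    best-matching library target, or 0 when no target clears the threshold."""
    best_s, best_n = 0.0, 0
    for n, ref in enumerate(library, start=1):
        s = similarity(cur, ref, weights)
        if s > best_s:
            best_s, best_n = s, n
    return best_n if best_s > threshold else 0
```

A return value of 0 plays the same role as the class number 0 above: the current target is treated as new and inserted into the library.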
Optionally, this application can also merge targets within an image frame and split targets in the object library. Fig. 3 shows the target merging and splitting schematic provided by this application, which specifically includes:
Step 305, targets Ci and Cj exist in the present frame;
Step 310, determine whether the class number Ni of target Ci and the class number Nj of target Cj are identical; if not, terminate, otherwise execute step 315;
Step 315, according to the two targets' quadrants Q and matching weights W, compute their feature similarity S;
Step 320, determine whether S is greater than 1; if so, execute step 325, otherwise execute step 330;
Step 325, combine all points of Ci and Cj together;
Step 330, split: N = N + 1 and create target TarN.
Compute the characteristic direction L of Ci and Cj from their centers of gravity (x̄i, ȳi) and (x̄j, ȳj);
Step 335, determine whether L is greater than 1; if so, execute step 345, otherwise execute step 340.
Step 340, split up and down: move all points of TarX's point set that satisfy the up-down split condition into TarN, leaving the remaining points in TarX. After processing, update the feature values of TarN and TarX, and assign class numbers X and N to Ci and Cj according to which side of the split each lies on.
Step 345, split left and right: move all points of TarX's point set that satisfy the left-right split condition into TarN, leaving the remaining points in TarX. After processing, update the feature values of TarN and TarX, and assign class numbers Ni and N to Ci and Cj according to which side of the split each lies on.
Step 350, update the class numbers of Ci and Cj.
When the threshold is small, a point cloud segmentation method may wrongly split one target into two classes, so the region between the two parts is mistaken for a passable area and the autonomous vehicle drives straight into the obstacle, causing an accident. This application avoids that defect by merging classes, preventing such problems. Conversely, when the threshold is large, a segmentation method may wrongly merge two targets into one class, so a passable area in front of the vehicle is instead taken for the middle of one large target and cannot be traversed, forcing the autonomous vehicle to brake and reducing ride comfort; frequent deceleration or stopping at high speed is also insufficiently safe. This application avoids that defect by splitting classes, allowing the vehicle to proceed steadily at a given speed.
Optionally, this application also updates the velocity of targets in the object library. Based on an ICP matching algorithm, the database target is matched with the target the current frame matched to, yielding the displacement in the X and Y directions; dividing by the time difference (the current-frame time minus the time the target last appeared) gives the velocity. Specifically:
(1) Initialize, i = 0;
(2) i = i + 1. If i > m, processing terminates. If the class number of Ci is 0, repeat this step; otherwise proceed to step (3);
(3) Using the Iterative Closest Point (ICP) matching method, match the point sets of Ci and TarNi (Ni being the class number Ci matched in the object library, TarNi the object-library target Ci matched) to obtain the point-set displacement relation <Δxi, Δyi>;
(4) Compute the velocity of TarNi:
vx = Δxi / (Tnow − T), vy = Δyi / (Tnow − T),
where Tnow is the current time and T is the time target TarNi last appeared, obtainable from the object library.
Using the ICP matching method, the present application calculates the displacement relation between the point sets of a target across two frames, and thereby calculates the speed more accurately. ICP matches entire point sets to obtain the displacement, with higher precision than common key-point displacement calculation methods. By computing speed precisely, the application avoids incorrect decisions caused by inaccurate speed measurement and enables automatic car-following, improving the ride experience.
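Step (3) above can be sketched with a minimal translation-only ICP, which matches the displacement-only output <Δx_i, Δy_i> described in the text. This is an illustrative implementation, not the patent's exact algorithm, and the point sets and time difference are made up for the example:

```python
import numpy as np

def icp_translation(src, dst, iters=20, tol=1e-6):
    """Minimal translation-only ICP: iteratively associate each source
    point with its nearest destination point and shift the source by
    the mean residual.  Returns the accumulated (dx, dy)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    shift = np.zeros(2)
    for _ in range(iters):
        # brute-force nearest-neighbour association
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        step = (dst[idx] - src).mean(axis=0)
        src += step
        shift += step
        if np.linalg.norm(step) < tol:
            break
    return shift

prev = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # Tar_Ni point set
curr = prev + np.array([0.5, 0.2])                       # C_i point set, shifted
dx, dy = icp_translation(prev, curr)
v = np.array([dx, dy]) / 0.1        # speed = displacement / (T_now - T), dt = 0.1 s
```

Because whole point sets are matched, the recovered displacement is robust even when no single scan point recurs exactly between frames.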
Optionally, the present application also updates the target database and filters the target speeds; the detailed process is shown in Fig. 4:
Step 405: take any target C_i in the current image frame;
Step 410: initialize the update operation and determine whether the class label of C_i equals 0; if so, execute step 420, otherwise execute step 415;
Step 415: update the target in the object library. For target C_i in the current frame, if its class label is not 0 (in which case the label of C_i is N_i), replace Tar_Ni with the point cloud and characteristic values of C_i, append the speed <v_x,i, v_y,i> to the speed queue of Tar_Ni, and set its time to the current-frame time;
Step 420: create a new target in the object library. If the class label of C_i is 0, then N = N + 1 and a target Tar_N is created;
Step 425: assign values to the new target created in step 420. Assign the point cloud and characteristic values of C_i to Tar_N, set the speed of Tar_N to <0, 0>, and set the appearance time of Tar_N to the current-frame time;
Step 430: determine whether all classes have been processed; if so, execute step 435, otherwise process the remaining classes of the current frame;
Step 435: take any target j in the object library;
Step 440: obtain the time T_j of the j-th target Tar_j in the object library and calculate the time difference with the current frame: ΔT = T_now − T_j;
Step 445: judge whether ΔT is greater than 1.5 s; if so, execute step 450, otherwise execute step 455;
Step 450: consider that the target no longer appears and delete Tar_j;
Step 455: confirm whether all targets have been processed; if so, execute step 460, otherwise process the remaining targets in the object library;
Step 460: filter the speed of each target in the object library with a Savitzky-Golay filter.
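Steps 405 through 455 amount to a dictionary-style update of the object library. The sketch below uses an assumed library layout (`points`, `speed_queue`, `time` per target); the layout and function name are illustrative, not the patent's data model:

```python
def update_object_library(library, frame_targets, t_now, max_age=1.5):
    """Object-library update sketch for steps 405-455: matched targets
    (label != 0) overwrite their library entry (step 415), unmatched
    ones (label 0) become new entries (steps 420/425), and entries
    unseen for more than `max_age` seconds are deleted (steps 440-450).
    `library` maps id -> {'points', 'speed_queue', 'time'}."""
    next_id = max(library, default=0)
    for c in frame_targets:               # each c: {'label', 'points', 'speed'}
        if c['label'] != 0:               # step 415: update matched target
            tgt = library[c['label']]
            tgt['points'] = c['points']
            tgt['speed_queue'].append(c['speed'])
            tgt['time'] = t_now
        else:                             # steps 420/425: create new target
            next_id += 1
            library[next_id] = {'points': c['points'],
                                'speed_queue': [(0.0, 0.0)],
                                'time': t_now}
    # steps 440-450: drop targets not seen for more than max_age seconds
    for tid in [t for t in library if t_now - library[t]['time'] > max_age]:
        del library[tid]
    return library

library = {1: {'points': [], 'speed_queue': [(0.0, 0.0)], 'time': 0.0}}
frame = [{'label': 0, 'points': [(5.0, 5.0)], 'speed': (0.0, 0.0)}]
library = update_object_library(library, frame, t_now=5.0)
# target 1 is stale (unseen for 5 s > 1.5 s) and is dropped; target 2 is created
```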
The present application filters the target speeds with the Savitzky-Golay filtering method; the resulting speed exhibits no obvious delay, which improves the ability to continuously track targets whose acceleration changes greatly and improves the precision of target-position prediction. Safety is therefore improved, and the vehicle does not need an additional radar sensor, reducing the system configuration cost.
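As an illustration of step 460, SciPy's `savgol_filter` can smooth a target's speed history; the window length and polynomial order below are illustrative choices, not parameters taken from the patent:

```python
import numpy as np
from scipy.signal import savgol_filter

# Speed history of one target (m/s), e.g. the v_x component of its
# speed queue in the object library.
vx = np.array([10.0, 10.2, 9.9, 10.4, 10.1, 10.3, 10.0, 10.2, 10.1])

# Savitzky-Golay fits a local polynomial per window, smoothing noise
# without the phase lag a moving average would introduce.
vx_smooth = savgol_filter(vx, window_length=5, polyorder=2)
```

Because the filter is a local polynomial fit rather than a running mean, a genuine acceleration trend in the queue survives the smoothing, which is what enables the continuous tracking of rapidly accelerating targets mentioned above.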
The above process realizes the segmentation and merging of point clouds, reduces the computational load of point cloud processing, and improves the real-time performance of data processing.
Figure 15 shows the flow chart of target matching and speed measurement provided by the present application, which specifically includes:
Step 1505: receive laser radar data;
Step 1510: segment the point cloud into clusters;
Step 1515: extract geometrical features;
Step 1520: obtain the ego-vehicle motion state, specifically including speed and direction;
Step 1525: predict the target displacement according to the target speed and last appearance time in the object library and the current time;
Step 1530: match the extracted geometrical features against the geometrical features of the targets in the object library and the predicted target displacements;
Step 1535: merge or split targets;
Step 1540: update the object library;
Step 1545: calculate the target displacement and speed, for example using the ICP matching algorithm;
Step 1550: apply Savitzky-Golay filtering to the target speed; the filtered target speed can then be updated in the object library.
The present application records targets in a database and identifies the targets of each new frame by comparison with the targets in the database, which successfully handles the case where a target disappears for several frames. In extreme cases the sensor may fail to scan a target; recording targets in the database ensures that even if the target does not appear in the current frame, the autonomous vehicle can still anticipate an obstacle (target) at the corresponding position, avoiding accidents. When a lost target reappears, the present application can recognize that the reappearing target is the same target as before, so the target's earlier motion-attribute information is not lost, which enhances the system's ability to predict target-state changes and improves the stability of system operation. For scanning discontinuities caused by occlusion during target recognition, re-clustering through matching prevents a target from being mistakenly divided into two classes.
In step 1515, the geometrical features can be extracted by the following procedure:
Figure 16 shows a schematic diagram of the scanning results at each stage as a rectangular target vehicle overtakes from the rear to the front left. In Figure 16, the light boxes indicate the vehicle and the dots indicate laser scanning points. At stage 11 (target directly behind), the laser radar can only scan the front of the target; at stage 12 (target at the left rear), it can scan the front and the right side of the target; at stage 13 (target on the left), it can only scan the right side of the target; at stage 14 (target at the left front), it can scan the rear and the right side of the target; at stage 15 (target in front), it can only scan the rear of the target.
As can be seen from Figure 16, the scanning results of a target differ greatly across orientations. A tracking or speed-measuring method based on the center point or on the target length/width would therefore introduce large errors. The present application instead characterizes the geometry of a target by the maxima, minima, and averages along the horizontal and vertical axes together with the target's center of gravity.
In the first step, the maximum and minimum values of x and y can be computed from the point cloud of each class: x_max, x_min, y_max, y_min;
In the second step, the center of gravity of each point cloud, i.e. of each target, is calculated from the point cloud of each class: x̄ = (1/n)·Σ x_i, ȳ = (1/n)·Σ y_i, where n is the number of points in the class and <x_i, y_i> is the coordinate of the i-th point in the class. After each segmentation, the center of gravity of each class is calculated with the above formula.
In the third step, the target is divided into the four parts "left, right, upper, lower", and the characteristic values x̄_k, ȳ_k, x_min,k, x_max,k, y_min,k, y_max,k of each part k are calculated. The four parts are computed as follows:
Region 1 (left part) is computed from the points whose x value is less than x_min + 0.5, i.e. for any point i in the class, if x_i < x_min + 0.5, it contributes to the averages and the maxima/minima of x and y: x̄_1, ȳ_1, x_min,1, x_max,1, y_min,1, y_max,1;
Region 2 (right part) is computed from the points whose x value is greater than x_max − 0.5, i.e. for any point i in the class, if x_i > x_max − 0.5, it contributes to the averages and the maxima/minima of x and y: x̄_2, ȳ_2, x_min,2, x_max,2, y_min,2, y_max,2;
Region 3 (upper edge part) is computed from the points whose y value is less than y_min + 0.5, i.e. for any point i in the class, if y_i < y_min + 0.5, it contributes to the averages and the maxima/minima of x and y: x̄_3, ȳ_3, x_min,3, x_max,3, y_min,3, y_max,3;
Region 4 (lower part) is computed from the points whose y value is greater than y_max − 0.5, i.e. for any point i in the class, if y_i > y_max − 0.5, it contributes to the averages and the maxima/minima of x and y: x̄_4, ȳ_4, x_min,4, x_max,4, y_min,4, y_max,4.
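The three steps above can be sketched as follows; the 0.5 m border margin follows the text, while the function name and dictionary layout are illustrative:

```python
import numpy as np

def region_features(points):
    """Per-class geometric features: global extremes, center of gravity,
    and the mean/min/max of x and y inside the four 0.5 m border
    regions (left, right, upper, lower) described in the text."""
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    feats = {'xmin': x.min(), 'xmax': x.max(),
             'ymin': y.min(), 'ymax': y.max(),
             'centroid': (x.mean(), y.mean())}
    regions = {1: x < feats['xmin'] + 0.5,   # region 1: left part
               2: x > feats['xmax'] - 0.5,   # region 2: right part
               3: y < feats['ymin'] + 0.5,   # region 3: upper edge part
               4: y > feats['ymax'] - 0.5}   # region 4: lower part
    for k, mask in regions.items():
        sub = pts[mask]
        feats[k] = {'xmean': sub[:, 0].mean(), 'ymean': sub[:, 1].mean(),
                    'xmin': sub[:, 0].min(), 'xmax': sub[:, 0].max(),
                    'ymin': sub[:, 1].min(), 'ymax': sub[:, 1].max()}
    return feats

# A square 2 m cluster: each corner point falls into two border regions.
cluster = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]
f = region_features(cluster)
```

Because each feature is anchored to an edge of the scan rather than to the (orientation-dependent) center point, the match survives the changing silhouettes shown in Figure 16.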
Further, the heading direction of the vehicle can be defined as the 0 direction, with the scanning direction θ positive clockwise (i.e. directly right is 90°, directly behind is 180°, directly left is 270°, and directly ahead is 0°/360°). The scanning direction of each target is calculated from the coordinates of the target's center-of-gravity point.
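The scan-direction formula itself is not reproduced in the text above; under the stated convention (0° straight ahead, increasing clockwise), and assuming a vehicle frame with x pointing right and y pointing forward, the direction can be computed from the centroid as:

```python
import math

def scan_direction(cx, cy):
    """Bearing of a target centroid (cx, cy) seen from the ego vehicle,
    in degrees: 0 deg straight ahead, increasing clockwise
    (90 right, 180 behind, 270 left).  Assumes x = right, y = forward;
    the axis convention is an assumption, not stated in the source."""
    return math.degrees(math.atan2(cx, cy)) % 360.0
```

Swapping the usual `atan2(y, x)` argument order yields exactly the clockwise-from-forward convention described in the text.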
As shown in Figure 17, the present application provides a laser radar point cloud sorting device, including a processor 1705 and a storage device 1710. A program is stored in the storage device; when the program is executed by the processor, the laser radar point cloud classification method provided by the present application can be implemented.
Correspondingly, the present application provides a target matching device, whose structure can refer to Figure 17; when the program in the storage device is executed by the processor, the target matching method provided by the present application can be implemented. Optionally, the device may include a prediction module for predicting the target displacement according to the current time and the preceding speed and preceding time of the target in the object library, and a matching module for matching according to the current features of the target, the preceding features of the target in the object library, and the predicted target displacement. Optionally, the device further includes a classification module for classifying the point cloud according to the distances between adjacent points and a distance threshold. Optionally, the device further includes a merging module for determining the outer envelope boxes of the classified points on the two-dimensional projection plane and merging the classes whose outer envelope boxes intersect. The matching module is mainly used to calculate the similarity between a target in the current frame and the targets in the object library, so as to determine the target in the object library corresponding to the target in the current frame; the choice of geometrical features improves the matching precision. In addition, the matching module can also merge and split targets, calculate the target speed, and apply Savitzky-Golay filtering to the speed.
Correspondingly, the present application provides a vehicle that may include the laser radar point cloud target segmentation device and/or the target matching device provided by the present application, for realizing assisted driving of the vehicle. For example, the control system of the vehicle uses the laser radar target segmentation device provided by the present application to segment the targets around the vehicle, and matches newly appearing targets against the targets in the original database, thereby realizing target tracking and speed measurement, for example in automatic car-following applications.
It should be understood by those skilled in the art that the embodiments of the present application can be provided as a method, an apparatus, or a computer program product. Therefore, the present application can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
Several specific embodiments of the present application have been shown and described above; but as previously stated, it should be understood that the present application is not limited to the forms disclosed herein, that these should not be regarded as excluding other embodiments, and that the application can be used in various other combinations, modifications, and environments and can be changed, within the scope contemplated by the application, through the above teachings or through the technology or knowledge of related fields. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present application shall all fall within the scope of protection of the appended claims of the present application.
Claims (10)
1. A multi-line laser radar target segmentation method, characterized by comprising:
classifying the point cloud acquired by multi-line laser radar scanning of targets to obtain subclasses of the point cloud of the same target;
projecting the point clouds of the subclasses onto a two-dimensional plane;
obtaining the outer envelope box of each subclass; and
merging the subclasses whose outer envelope boxes intersect into one class.
2. The method according to claim 1, characterized in that obtaining the outer envelope box of each subclass comprises:
determining the first point and the last point belonging to each subclass, and connecting the first point and the last point into a line segment;
determining, among the remaining points of each subclass, the point farthest from the line segment; and
taking the triangle formed by the point farthest from the line segment together with the first point and the last point as the outer envelope box of each class.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
calculating the distance between two adjacent points in the point cloud; and
dividing the two adjacent points into the same subclass when the distance satisfies a preset condition.
4. The method according to claim 3, characterized in that the preset condition is that the distance between the two adjacent points is less than or equal to 0.1 + D·T meters, where D is the distance between the earlier-scanned of the two adjacent points and the laser radar, and T is a preset coefficient.
5. A target matching method, characterized by comprising:
predicting the target displacement according to the current time and the preceding speed and preceding time of a target in an object library; and
matching according to the current features of the target, the preceding features of the target in the object library, and the predicted target displacement;
wherein the current features of the target and/or the preceding features of the target in the object library are the geometrical features of the class obtained by the method according to any one of claims 1-4.
6. The method according to claim 5, characterized in that matching according to the current features of the target, the preceding features of the target in the object library, and the predicted target displacement comprises:
calculating a first matching feature of the target according to the current features of the target;
calculating a second matching feature of the target according to the preceding features of the target in the object library and the predicted target displacement; and
determining the matched target in the object library according to the first matching feature and the second matching feature; or
the method further comprises:
merging or splitting the targets in the object library according to the first matching feature of the target and the second matching feature of the target.
7. A multi-line laser radar point cloud target segmentation device, characterized by comprising:
a storage device for storing a program; and
a processor for executing the program to realize the method according to any one of claims 1-4.
8. A target matching device, characterized by comprising:
a storage device for storing a program; and
a processor for executing the program to realize the method according to any one of claims 5-6.
9. A storage device on which a program is stored, characterized in that the program, when executed by a processor, realizes the method according to any one of claims 1-4 and/or the method according to any one of claims 5-6.
10. A vehicle, characterized by comprising the device according to claim 7 and/or the device according to claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810116188.4A CN110119751B (en) | 2018-02-06 | 2018-02-06 | Laser radar point cloud target segmentation method, target matching method, device and vehicle |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110119751A true CN110119751A (en) | 2019-08-13 |
CN110119751B CN110119751B (en) | 2021-07-20 |
Family
ID=67519935
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810116188.4A Active CN110119751B (en) | 2018-02-06 | 2018-02-06 | Laser radar point cloud target segmentation method, target matching method, device and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110119751B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110766719A (en) * | 2019-09-21 | 2020-02-07 | 北醒(北京)光子科技有限公司 | Target tracking method, device and storage medium |
CN113689471A (en) * | 2021-09-09 | 2021-11-23 | 中国联合网络通信集团有限公司 | Target tracking method and device, computer equipment and storage medium |
CN113850995A (en) * | 2021-09-14 | 2021-12-28 | 华设设计集团股份有限公司 | Event detection method, device and system based on tunnel radar vision data fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140218353A1 (en) * | 2013-02-01 | 2014-08-07 | Apple Inc. | Image group processing and visualization |
CN104463871A (en) * | 2014-12-10 | 2015-03-25 | 武汉大学 | Streetscape facet extraction and optimization method based on vehicle-mounted LiDAR point cloud data |
CN105957076A (en) * | 2016-04-27 | 2016-09-21 | 武汉大学 | Clustering based point cloud segmentation method and system |
CN106778749A (en) * | 2017-01-11 | 2017-05-31 | 哈尔滨工业大学 | Based on the touring operating area boundary extraction method that concentration class and Delaunay triangles are reconstructed |
CN107025323A (en) * | 2016-12-29 | 2017-08-08 | 南京南瑞信息通信科技有限公司 | A kind of transformer station's fast modeling method based on ATL |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||