Detailed Description
Many details are set forth in the following description in order to provide a full understanding of the present application. However, the present application can be implemented in many other ways different from those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the present application. Therefore, the present application is not limited by the specific implementations disclosed below.
The present application provides a road object identification method and a road object identification device, which are described in detail below with reference to the accompanying drawings in conjunction with embodiments of the present application.
Referring to FIG. 1, which is a flowchart of an embodiment of a road object identification method provided by the present application, the road object identification method includes the following steps:
Step S101: According to a preset segment length and a reference line of a road, obtain segmentation position ranges for segmenting the point cloud data of the road.
Step S102: Assign the points in the point cloud data of the road that fall into the same segmentation position range to the same point set.
Step S103: For each point set, according to the position coordinates of the points in the point set and the direction of the reference line of the road, obtain a rectangle that covers all the points in the point set along the direction of the reference line.
Step S104: For the rectangles corresponding to two adjacent segmentation position ranges, judge whether the rectangle corresponding to one of the ranges and the rectangle corresponding to the other have a connection relationship.
Step S105: Mark the rectangles having a connection relationship as belonging to the same road object.
The above steps S101 to S105 are described in detail below.
First, step S101, obtaining the segmentation position ranges for segmenting the point cloud data of the road according to the preset segment length and the reference line of the road, is described.
The reference line of the road includes, but is not limited to, the trajectory line generated when the point cloud data of the road is collected, or a lane line of the road itself. The point cloud data of the road may refer to a set of points, collected by scanning the road based on existing technology, that carry attribute information such as position and reflectance value; such a set of points usually corresponds to a road object (for example, a text marking or a graphic marking on the road).
In step S101, suppose, for example, that a road is an ideal straight one-way road with two adjacent parallel lane lines (denoted R1 and R2, respectively), the reference line of the road is a lane line of the road itself, the road length is L, and the preset segment length is L/10. Then one endpoint of lane line R1 may be taken as a start point and the endpoint at the other end of R1 as an end point, and on the line segment from the start point to the end point, one point is taken every length L/10 (excluding the start point and the end point) as a segmentation point. Each segmentation point is then used as a reference point to draw a perpendicular to lane line R2, which intersects lane line R2. The line segment connecting each segmentation point and the intersection point of the corresponding perpendicular with lane line R2 can then be used as a dividing line for segmenting the point cloud data of this straight one-way road, thereby obtaining adjacent segmentation position ranges.
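As a rough numerical sketch of this straight-road example (in an assumed coordinate frame where R1 and R2 are horizontal parallel lines; all variable names and values are illustrative only, not part of the application), the segmentation points and dividing lines could be computed as follows:

```python
import numpy as np

# Minimal sketch of the straight one-way road example: lane lines R1 and R2 are
# assumed parallel to the x-axis, the road length is L, and the preset segment
# length is L / 10. Names, values, and the coordinate frame are illustrative.
L = 100.0                      # road length
y_r1, y_r2 = 0.0, 3.5          # assumed y-coordinates of lane lines R1 and R2
segment_length = L / 10

# Segmentation points on R1, excluding the start point and the end point.
waypoints = [np.array([x, y_r1]) for x in np.arange(segment_length, L, segment_length)]

# For each segmentation point, drop a perpendicular onto R2; for a horizontal
# lane line the perpendicular is simply a vertical segment.
dividing_lines = [(p, np.array([p[0], y_r2])) for p in waypoints]

# Adjacent dividing lines (plus the two road ends) bound the segmentation
# position ranges used to split the point cloud.
boundaries_x = [0.0] + [p[0] for p in waypoints] + [L]
segment_ranges = list(zip(boundaries_x[:-1], boundaries_x[1:]))
print(segment_ranges[:3])      # e.g. [(0.0, 10.0), (10.0, 20.0), (20.0, 30.0)]
```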
According to an embodiment of the application, referring to FIG. 2 and FIG. 3, the reference line of the road is the trajectory line generated when the point cloud data of the road is collected. In this case, step S101 includes:
Step S201: According to the generation time order of the trajectory points included in the trajectory line of the road, sequentially obtain target trajectory points for segmenting the point cloud data of the road, where the distance between two consecutive target trajectory points is equal to the preset segment length.
Specifically, the trajectory line of the road includes, but is not limited to, the trajectory line along which the collection instrument used for collecting the point cloud data of the road moves while collecting the point cloud data. The collection instrument usually acquires the trajectory points on the trajectory line in chronological order and along the extending direction of the road, for example one trajectory point every 5 seconds or one trajectory point every 50 meters. The acquired trajectory points can then be taken as target trajectory points according to their generation time, or one trajectory point can be taken as a target trajectory point every predetermined number of trajectory points.
Taking FIG. 3 as an example, one trajectory point is collected every 50 meters and taken as a target trajectory point; for example, trajectory points A, B, C, ... are taken as target trajectory points, where the distance between the two adjacent target trajectory points A and B, and between B and C, is equal to the preset segment length.
Step S202: Determine two consecutive target trajectory points as the endpoints of one segmentation position range. For example, the target trajectory points A and B shown in FIG. 3 are determined as the endpoints of one segmentation position range (labeled range 1), and the target trajectory points B and C are determined as the endpoints of another segmentation position range (labeled range 2).
Step S203: Using the coincident endpoint of two adjacent segmentation position ranges as an intersection point, draw a perpendicular, the perpendicular being perpendicular to the trajectory segment constituted by the two adjacent segmentation position ranges. For example, a perpendicular is drawn using the endpoint B shared by range 1 and range 2 above as the intersection point, the perpendicular being perpendicular to the trajectory segment (AC) constituted by the two segmentation position ranges.
Step S204: Determine the perpendicular as the shared boundary of the segmentation position ranges. For example, the perpendicular shown in FIG. 3 is taken as the shared boundary of segmentation position range 1 and range 2.
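A minimal sketch of steps S201 to S204, assuming the trajectory points are already ordered by generation time and given as two-dimensional coordinates; the function name, the use of NumPy, and the boundary representation (a point plus a direction) are illustrative choices, not part of the application:

```python
import numpy as np

def segment_ranges_from_trajectory(track_points, segment_length):
    """Sketch of S201-S204: pick target trajectory points spaced by the preset
    segment length along the trajectory, pair consecutive ones into segmentation
    position ranges, and build a perpendicular shared boundary at each coincident
    endpoint. track_points: (N, 2) array ordered by generation time."""
    pts = np.asarray(track_points, dtype=float)
    # S201: walk the trajectory and keep a point once the accumulated distance
    # since the previous target point reaches segment_length.
    targets = [pts[0]]
    acc = 0.0
    for prev, cur in zip(pts[:-1], pts[1:]):
        acc += np.linalg.norm(cur - prev)
        if acc >= segment_length:
            targets.append(cur)
            acc = 0.0
    # S202: two consecutive target points are the endpoints of one range.
    ranges = list(zip(targets[:-1], targets[1:]))
    # S203/S204: at each coincident endpoint, the shared boundary is the line
    # through that endpoint perpendicular to the local trajectory segment.
    boundaries = []
    for (a, b), (_, c) in zip(ranges[:-1], ranges[1:]):
        direction = (c - a) / np.linalg.norm(c - a)   # trajectory segment AC
        normal = np.array([-direction[1], direction[0]])
        boundaries.append((b, normal))                # boundary point + direction
    return ranges, boundaries
```

For the FIG. 3 example, the returned boundaries would contain the perpendicular through B to the trajectory segment AC.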
According to another embodiment of the application, referring to FIG. 4 and FIG. 5, the reference line of the road is a lane line of the road. In this case, step S101 includes:
Step S301: Along the direction of the lane line of the road, sequentially obtain, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, where the distance between two consecutive target shape points is equal to the preset segment length.
Specifically, the lane line of the road may be a straight line or a curve, and the shape points of the lane line include, for example, points taken on the lane line at every preset distance. The target shape points may be points taken from the shape points in order, one every predetermined number of shape points. For example, starting from the first shape point of the upper lane line shown in FIG. 5, with the shape points numbered 1, 2, ..., N in order, the next shape point is taken as a target shape point every 3 shape points; that is, the first target shape point is shape point 1, the second target shape point is shape point 5, the third target shape point is shape point 9, and so on.
Step S302: Using each target shape point as an intersection point, draw a normal to the other lane line of the road, and determine the shape point associated with the target shape point from the intersection of the normal and the other lane line.
Here, the normal drawn using each target shape point as an intersection point is a line perpendicular to the lane line on which the target shape point is located.
Step S303: Determine two consecutive target shape points together with their associated shape points as the endpoints of one segmentation position range. For example, as shown in FIG. 5, the target shape points q1, q2 and their associated shape points q4, q5 are determined as the endpoints of one segmentation position range, and the target shape points q2, q3 and their associated shape points q5, q6 are determined as the endpoints of another segmentation position range.
Step S304: Determine the line segment constituted by the coincident endpoints of two adjacent segmentation position ranges as the shared boundary of the segmentation position ranges. For example, the line segment constituted by the coincident endpoints q2 and q5 of the two adjacent segmentation position ranges shown in FIG. 5 is determined as the shared boundary of these two segmentation position ranges.
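Similarly, a minimal sketch of steps S301 to S304, under the assumptions that both lane lines are given as ordered shape-point arrays, that every fourth shape point is taken as a target shape point as in the FIG. 5 example, and that the foot of the normal on the other lane line is approximated by the nearest shape point of that lane line (all names are illustrative):

```python
import numpy as np

def segment_ranges_from_lane_lines(lane1, lane2, step=4):
    """Sketch of S301-S304. lane1 / lane2: (N, 2) and (M, 2) arrays of ordered
    shape points. Every `step`-th shape point of lane1 is a target shape point
    (e.g. shape points 1, 5, 9, ... for step=4); its associated point on lane2
    is approximated by the closest shape point of lane2, standing in for the
    foot of the normal drawn through the target shape point."""
    lane1 = np.asarray(lane1, dtype=float)
    lane2 = np.asarray(lane2, dtype=float)
    # S301: target shape points q1, q2, q3, ...
    targets = lane1[::step]
    # S302: associated shape points q4, q5, q6, ... on the other lane line.
    associated = np.array([lane2[np.argmin(np.linalg.norm(lane2 - q, axis=1))]
                           for q in targets])
    # S303: two consecutive target points and their associated points are the
    # four endpoints of one segmentation position range.
    ranges = [(targets[i], targets[i + 1], associated[i], associated[i + 1])
              for i in range(len(targets) - 1)]
    # S304: the shared boundary of two adjacent ranges is the segment joining
    # the coincident endpoints, e.g. q2-q5 between the first two ranges.
    boundaries = [(targets[i], associated[i]) for i in range(1, len(targets) - 1)]
    return ranges, boundaries
```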
Next, step S102, assigning the points in the point cloud data of the road that fall into the same segmentation position range to the same point set, is described. Taking FIG. 3 as an example, the points in the point cloud data that fall into range 1 are assigned to one point set, and the points that fall into range 2 are assigned to another point set.
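Step S102 can be sketched as follows, assuming each segmentation position range is represented by its two endpoint positions on the reference line (for example A and B of range 1 in FIG. 3) and that a point falls into a range when its projection onto the segment between those endpoints lies between them; the representation and names are illustrative:

```python
import numpy as np

def assign_points_to_sets(points, ranges):
    """Sketch of S102. points: (N, 2) point cloud coordinates (attribute columns
    such as reflectance omitted). ranges: list of (start, end) endpoint pairs of
    the segmentation position ranges, e.g. (A, B) and (B, C) from FIG. 3."""
    points = np.asarray(points, dtype=float)
    point_sets = [[] for _ in ranges]
    for p in points:
        for i, (start, end) in enumerate(ranges):
            start, end = np.asarray(start, dtype=float), np.asarray(end, dtype=float)
            axis = end - start
            t = np.dot(p - start, axis) / np.dot(axis, axis)
            if 0.0 <= t < 1.0:            # between the two boundaries of range i
                point_sets[i].append(p)
                break
    return point_sets
```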
Next, step S103, obtaining for each point set, according to the position coordinates of the points in the point set and the direction of the reference line of the road, a rectangle covering all the points in the point set along the direction of the reference line, is described.
Taking FIG. 6 as an example, the reference line of the road is, for example, the trajectory line generated when the point cloud data of the road is collected. Based on the above steps S101 and S102, one point set in range 1 and another point set in range 2 are obtained. Further, according to the position distribution of the points in the two point sets and the extending direction of the trajectory line of the road, rectangle 1 and rectangle 2, which respectively cover all the points in the two point sets along the direction of the trajectory line, are obtained.
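A minimal sketch of step S103: computing, for one point set, the rectangle that covers all of its points and is aligned with the local direction of the reference line. Representing the rectangle by its four corner coordinates is an illustrative choice:

```python
import numpy as np

def covering_rectangle(point_set, line_direction):
    """Sketch of S103: the bounding box of the points in a frame whose first
    axis is the reference-line direction, returned as four corners in the
    original coordinate frame."""
    pts = np.asarray(point_set, dtype=float)
    u = np.asarray(line_direction, dtype=float)
    u = u / np.linalg.norm(u)              # along the reference line
    v = np.array([-u[1], u[0]])            # perpendicular to the reference line
    # Coordinates of every point along u and v.
    s, t = pts @ u, pts @ v
    corners_uv = [(s.min(), t.min()), (s.max(), t.min()),
                  (s.max(), t.max()), (s.min(), t.max())]
    # Map the corners back to the original coordinate frame.
    return np.array([a * u + b * v for a, b in corners_uv])
```

For rectangle 1 of FIG. 6, `line_direction` would be the extending direction of the trajectory line within range 1.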
Next, step S104, judging for the rectangles corresponding to two adjacent segmentation position ranges whether the rectangle corresponding to one of the ranges and the rectangle corresponding to the other have a connection relationship, is described.
According to an embodiment of the application, referring to FIG. 7, step S104 includes:
Step S401: For the rectangles corresponding to two adjacent segmentation position ranges, obtain the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance. Taking rectangle 1 and rectangle 2, corresponding to segmentation position ranges 1 and 2 shown in FIG. 6, as an example, the distances from the vertices of rectangle 1 (and likewise of rectangle 2) to the shared boundary BE of the two ranges are obtained as target distances.
Step S402: From the target distances corresponding to each of the two segmentation position ranges, obtain a shortest distance respectively. For example, the shortest distances from rectangle 1 and from rectangle 2 in FIG. 6 to the shared boundary BE are obtained respectively; assuming that rectangle 1 and rectangle 2 each have one side lying on the same straight line as the shared boundary BE, the shortest distances from rectangle 1 and from rectangle 2 to the shared boundary are both 0.
Step S403: Judge whether the two shortest distances are both less than a preset distance threshold; if so, the two rectangles corresponding to the two shortest distances have a connection relationship.
Specifically, assuming that the distance threshold is a certain value Y (a positive number), then since the two shortest distances in FIG. 6 are both 0 and therefore both less than Y, rectangle 1 and rectangle 2 have a connection relationship.
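Steps S401 to S403 can be sketched as follows, assuming the shared boundary is given as a point on the boundary together with the boundary's direction vector, so that a vertex's distance to the boundary is the absolute projection onto the boundary's normal (the function and parameter names are illustrative):

```python
import numpy as np

def rectangles_connected(rect1, rect2, boundary_point, boundary_dir, threshold):
    """Sketch of S401-S403. rect1 / rect2: (4, 2) arrays of rectangle vertices.
    The shared boundary is the line through boundary_point with direction
    boundary_dir. Returns True if both rectangles come within `threshold`
    of the shared boundary."""
    d = np.asarray(boundary_dir, dtype=float)
    normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    p0 = np.asarray(boundary_point, dtype=float)

    def shortest_distance(rect):
        # S401: distance of every vertex to the shared boundary (target distances);
        # S402: keep the shortest one.
        return min(abs(np.dot(v - p0, normal)) for v in np.asarray(rect, dtype=float))

    # S403: both shortest distances must be below the preset distance threshold.
    return shortest_distance(rect1) < threshold and shortest_distance(rect2) < threshold
```

In the FIG. 6 example both shortest distances are 0, so the check succeeds for any positive threshold Y.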
Finally, step S105, marking the rectangles having a connection relationship as belonging to the same road object, is described. Taking FIG. 6 as an example, since rectangle 1 and rectangle 2 have a connection relationship, the two rectangles are marked as belonging to the same road object.
According to an embodiment of the application, with continued reference to FIG. 1, the method further includes:
Step S106: Merge the rectangles belonging to the same road object to obtain a target rectangle corresponding to the road object, where the target rectangle covers all the rectangles corresponding to the same road object.
Specifically, referring to FIG. 8, by merging all the rectangles belonging to the same road object, the target rectangle corresponding to the road object is obtained. The target rectangle covers all the rectangles corresponding to the same road object, that is, it covers the point cloud corresponding to the complete road object.
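Because connection relationships only arise between rectangles of adjacent segmentation position ranges, chaining them into complete road objects amounts to a single pass over consecutive rectangles. The sketch below groups rectangle indices per road object; the pairwise merge of step S106 (see the sketch after the FIG. 9 discussion below) can then be applied within each group. The grouping logic is one illustrative reading of steps S105 and S106, not a prescribed implementation:

```python
def group_connected_rectangles(rect_count, is_connected):
    """Sketch of S105/S106 grouping: rectangles of consecutive segmentation
    position ranges that have a connection relationship are chained into one
    group, each group standing for one road object. is_connected(i, j) is
    assumed to wrap the S401-S403 check for adjacent rectangles i and j."""
    groups = []
    current = [0]
    for i in range(rect_count - 1):
        if is_connected(i, i + 1):
            current.append(i + 1)          # same road object as rectangle i
        else:
            groups.append(current)         # close the current road object
            current = [i + 1]
    groups.append(current)
    return groups
```

Within each group, the rectangles can then be merged pairwise into the target rectangle of step S106.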
According to an embodiment of the application, the rectangles to be merged can be merged pairwise and finally merged into one target rectangle. If the two rectangles to be merged have different widths, referring to FIG. 9, which shows two rectangles of different widths, merging two rectangles that have a connection relationship and belong to the same road object includes:
obtaining the linear equations of the long sides of the wider of the rectangles to be merged, where the long sides of a rectangle are the two sides parallel to the direction of the reference line of the road, and the two linear equations are denoted as a first linear equation and a second linear equation, respectively;
obtaining the linear equation of one short side of the narrower of the rectangles to be merged, denoted as a third linear equation, where the short sides are the sides perpendicular to the direction of the reference line of the road, and the short side used is the one farther from the shared boundary of the segmentation position ranges;
As mentioned above, the segmentation position ranges in which rectangles having a connection relationship are located have a shared boundary.
obtaining the intersection point of the third linear equation with the first linear equation, and the intersection point of the third linear equation with the second linear equation; and
determining the two intersection points, together with the two vertices of the wider rectangle that are farther from the two intersection points, as the four vertices of the merged rectangle, thereby merging the rectangles having a connection relationship. As shown in FIG. 9, A, B, C, and D in the figure are the vertices of the merged rectangle.
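A minimal sketch of this pairwise merge for rectangles of different widths, under simplifying assumptions: both rectangles are given as corner arrays, the reference-line direction is known, and the far short side of the narrower rectangle is identified as the one farther away from the wider rectangle (and hence from the shared boundary lying between the two). All names are illustrative:

```python
import numpy as np

def merge_rectangles(wide_rect, narrow_rect, line_dir):
    """Sketch of the FIG. 9 merge of two connected rectangles of different widths.
    wide_rect / narrow_rect: (4, 2) corner arrays of the wider and narrower
    rectangle; line_dir: direction of the road reference line. Working in a frame
    with axis u along the reference line and axis v across it, the far short side
    of the narrow rectangle (third linear equation) is intersected with the two
    long-side lines of the wide rectangle (first and second linear equations);
    those two intersection points plus the two far vertices of the wide rectangle
    form the merged rectangle."""
    u = np.asarray(line_dir, dtype=float)
    u = u / np.linalg.norm(u)
    v = np.array([-u[1], u[0]])
    wide = np.asarray(wide_rect, dtype=float)
    narrow = np.asarray(narrow_rect, dtype=float)

    s_w, t_w = wide @ u, wide @ v          # along-road / across-road coordinates
    s_n = narrow @ u

    # Far short side of the narrow rectangle: the one farther from the wide rectangle.
    s_far = s_n.max() if abs(s_n.max() - s_w.mean()) > abs(s_n.min() - s_w.mean()) else s_n.min()
    # Far vertices of the wide rectangle: the end farther from the intersection points.
    s_keep = s_w.min() if abs(s_far - s_w.min()) > abs(s_far - s_w.max()) else s_w.max()

    corners_st = [(s_far, t_w.min()), (s_far, t_w.max()),    # the two intersection points
                  (s_keep, t_w.max()), (s_keep, t_w.min())]  # far wide-rectangle vertices
    return np.array([a * u + b * v for a, b in corners_st])  # vertices of the merged rectangle
```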
Compared with the prior art, the present application segments the point cloud on the basis of the reference line of the road, which effectively avoids loss of data at the segmentation positions and prevents the computation result from being distorted. Further, by obtaining a rectangle covering all the points in each segment and obtaining the rectangles belonging to the same road object according to the positional relationship between the rectangles, the present application effectively identifies the complete road object corresponding to all the points covered by these rectangles, avoiding the reduction in recognition rate caused by segmentation. In addition, the road object identification method provided by the present application can run automatically without manual participation, which reduces labor cost and has strong practicality.
In the above embodiments, a road object identification method is provided; correspondingly, the present application also provides a road object identification device. Referring to FIG. 10, which is a schematic diagram of an embodiment of a road object identification device provided by the present application. Since the process performed by the device embodiment is substantially similar to that of the method embodiment, the device embodiment described below is only illustrative, and for related details reference may be made to the description of the method embodiment.
A road object identification device provided in this embodiment includes:
a segmentation position range acquiring unit 101, configured to obtain, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting the point cloud data of the road;
an allocation unit 102, configured to assign the points in the point cloud data of the road that fall into the same segmentation position range to the same point set;
a rectangle acquiring unit 103, configured to obtain, for each point set, according to the position coordinates of the points in the point set and the direction of the reference line of the road, a rectangle covering all the points in the point set along the direction of the reference line;
a judging unit 104, configured to judge, for the rectangles corresponding to two adjacent segmentation position ranges, whether the rectangle corresponding to one of the ranges and the rectangle corresponding to the other have a connection relationship; and
a marking unit 105, configured to mark the rectangles having a connection relationship as belonging to the same road object.
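Purely for illustration, the units of the device embodiment could be grouped into a single class, one method per unit; the class and method names are placeholders chosen for readability, not names used in the application:

```python
class RoadObjectRecognizer:
    """Illustrative grouping of units 101-105 of FIG. 10 into one class."""

    def __init__(self, segment_length, distance_threshold):
        self.segment_length = segment_length          # preset segment length
        self.distance_threshold = distance_threshold  # preset distance threshold

    def get_segment_ranges(self, reference_line):
        """Segmentation position range acquiring unit 101 (see S101)."""
        raise NotImplementedError

    def assign_points(self, point_cloud, ranges):
        """Allocation unit 102 (see S102)."""
        raise NotImplementedError

    def covering_rectangles(self, point_sets, reference_line):
        """Rectangle acquiring unit 103 (see S103)."""
        raise NotImplementedError

    def judge_connection(self, rect_a, rect_b, shared_boundary):
        """Judging unit 104 (see S104 / S401-S403)."""
        raise NotImplementedError

    def mark_same_object(self, rectangles, connections):
        """Marking unit 105 (see S105)."""
        raise NotImplementedError
```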
According to an embodiment of the application, referring to FIG. 10, the device further includes:
a merging unit 106, configured to merge the rectangles belonging to the same road object to obtain a target rectangle corresponding to the road object, where the target rectangle covers all the rectangles corresponding to the same road object.
According to an embodiment of the application, the judging unit 104 is specifically configured to:
for the rectangles corresponding to two adjacent segmentation position ranges, obtain the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance;
from the target distances corresponding to each of the two segmentation position ranges, obtain a shortest distance respectively; and
judge whether the two shortest distances are both less than a preset distance threshold, and if so, determine that the two rectangles corresponding to the two shortest distances have a connection relationship.
According to an embodiment of the application, when the reference line of the road is the trajectory line generated when the point cloud data of the road is collected, the segmentation position range acquiring unit 101 is specifically configured to:
according to the generation time order of the trajectory points included in the trajectory line of the road, sequentially obtain target trajectory points for segmenting the point cloud data of the road, where the distance between two consecutive target trajectory points is equal to the preset segment length;
determine two consecutive target trajectory points as the endpoints of one segmentation position range;
using the coincident endpoint of two adjacent segmentation position ranges as an intersection point, draw a perpendicular, the perpendicular being perpendicular to the trajectory segment constituted by the two adjacent segmentation position ranges; and
determine the perpendicular as the shared boundary of the segmentation position ranges.
According to an embodiment of the application, when the reference line of the road is a lane line of the road, the segmentation position range acquiring unit 101 is specifically configured to:
along the direction of the lane line of the road, sequentially obtain, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, where the distance between two consecutive target shape points is equal to the preset segment length;
using each target shape point as an intersection point, draw a normal to the other lane line of the road, and determine the shape point associated with the target shape point from the intersection of the normal and the other lane line;
determine two consecutive target shape points together with their associated shape points as the endpoints of one segmentation position range; and
determine the line segment constituted by the coincident endpoints of two adjacent segmentation position ranges as the shared boundary of the segmentation position ranges.
Although the present application has been disclosed above with preferred embodiments, they are not intended to limit the present application. Any person skilled in the art can make possible variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application shall be subject to the scope defined by the claims of the present application.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and a memory.
The memory may include forms of computer-readable media such as volatile memory, random access memory (RAM), and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
1. Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
2. Those skilled in the art will understand that embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of a full hardware embodiment, a full software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.