CN108629228A - Road object identification method and device

Road object identification method and device

Info

Publication number
CN108629228A
Authority
CN
China
Prior art keywords
road
point
rectangle
target
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710154821.4A
Other languages
Chinese (zh)
Other versions
CN108629228B (en)
Inventor
陈岳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autonavi Software Co Ltd filed Critical Autonavi Software Co Ltd
Priority to CN201710154821.4A priority Critical patent/CN108629228B/en
Publication of CN108629228A publication Critical patent/CN108629228A/en
Application granted granted Critical
Publication of CN108629228B publication Critical patent/CN108629228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a road object identification method and device. The identification method includes: according to a preset segment length and a reference line of a road, obtaining segmentation position ranges for segmenting the point cloud data of the road; assigning the points of the road's point cloud data that fall within the same segmentation position range to the same point set; for each point set, obtaining, according to the position coordinates of the points in the set and the direction of the road's reference line, a rectangle that covers all points of the set along the direction of the reference line; judging whether the rectangles corresponding to two adjacent segmentation position ranges have a connection relation; and marking rectangles that have a connection relation as belonging to the same road object. The application segments the point cloud data of the road based on the road reference line, obtains a rectangle covering all points in each segment, and determines which rectangles belong to the same road object from the positional relationship between rectangles, thereby effectively identifying the complete road object corresponding to all points covered by those rectangles.

Description

Road object identification method and device
Technical field
The application relates to the field of data processing, and in particular to a road object identification method and device.
Background art
In map navigation, automatic driving is a key technology, and the key to automatic driving is being able to identify the road environment around the vehicle with high precision, so that automatic driving remains safe and reliable.
Laser scanning technology, and in particular the recently developed vehicle-mounted mobile laser measurement systems, have received increasing attention because of their ability to acquire three-dimensional ground spatial information quickly and accurately. The data acquired by a vehicle-mounted laser scanning system is point cloud data characterized by a high point density and a large data volume. How to accurately and quickly extract road objects such as road terrain, ground features, figures and text from the point cloud data has become an urgent problem in the practical application of point cloud data.
Although there are methods for performing road object recognition on point clouds with large data volumes, they usually divide the acquired point cloud into several sub-point-clouds based on the data volume of the acquired point cloud and the computing capability of the computer, and then recognize the sub-point-clouds batch by batch. For example, if the data volume of the acquired point cloud is 10G and the single-run computing capability of the computer is 2G, the point cloud is divided into 5 sub-point-clouds for recognition. In the course of studying the prior art, the inventor found that the existing recognition methods divide the point cloud according to data volume, which to some extent cuts off the actual geographic connection between the road objects corresponding to the point clouds and thus affects the accuracy with which the road objects corresponding to the point cloud are identified.
Summary of the invention
In view of the above problems, the application provides a road object identification method and a corresponding road object identification device. The point cloud data of a road is segmented based on the road reference line; a rectangle covering all points in each segment is obtained; and, depending on whether the rectangles corresponding to adjacent segments have a connection relation, the rectangles belonging to the same road object are obtained. Once the point cloud data corresponding to the point sets covered by these rectangles has been determined, the road object can be identified more accurately. In effect, the embodiments of the application use the geographic connection between the acquired point cloud data to obtain the rectangles belonging to the same road object, determine the rectangle border and then the point cloud data inside that border, and thereby identify the road object more accurately.
According to one embodiment of the application, a road object identification method is provided, including: obtaining, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting the point cloud data of the road; assigning the points of the road's point cloud data that fall within the same segmentation position range to the same point set; for each point set, obtaining, according to the position coordinates of the points in the point set and the direction of the road's reference line, a rectangle that covers all points of the point set along the direction of the reference line; for the rectangles corresponding to two adjacent segmentation position ranges, judging whether one rectangle and the other have a connection relation; and marking rectangles that have a connection relation as belonging to the same road object.
According to another embodiment of the application, a road object identification device is provided, including: a segmentation position range acquiring unit, configured to obtain, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting the point cloud data of the road; an allocation unit, configured to assign the points of the road's point cloud data that fall within the same segmentation position range to the same point set; a rectangle acquiring unit, configured to obtain, for each point set and according to the position coordinates of the points in the point set and the direction of the road's reference line, a rectangle that covers all points of the point set along the direction of the reference line; a judging unit, configured to judge, for the rectangles corresponding to two adjacent segmentation position ranges, whether one rectangle and the other have a connection relation; and a marking unit, configured to mark rectangles that have a connection relation as belonging to the same road object.
Compared with the prior art, the application has the following advantages:
Compared with the prior art, the application segments the point cloud data of a road based on the road reference line, obtains a rectangle covering all points in each segment, and determines which rectangles belong to the same road object according to whether the rectangles of adjacent segments have a connection relation, so that road object recognition is performed on the point cloud data inside those rectangles. In effect, the embodiments of the application use the geographic connection between the acquired point cloud data to obtain the rectangles belonging to the same road object; once the rectangle border of the same road object, and thereby the point cloud data inside that border, has been determined, the road object is identified more accurately.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of a road object identification method provided by the application;
Fig. 2 is a flowchart describing step S101 of Fig. 1;
Fig. 3 is a schematic diagram of the trajectory line of a road;
Fig. 4 is a flowchart describing another embodiment of step S101 of Fig. 1;
Fig. 5 is a schematic diagram of the two lane lines of a road and the corresponding normals;
Fig. 6 is a schematic diagram of two rectangles, each covering a different point set;
Fig. 7 is a flowchart describing step S104 of Fig. 1;
Fig. 8 is a schematic diagram of the target rectangle obtained by merging all rectangles belonging to the same road object;
Fig. 9 is a schematic diagram of the line equations involved in merging two rectangles of different widths;
Fig. 10 is a schematic diagram of an embodiment of a road object identification device provided by the application.
Detailed description of the embodiments
Many details are set forth in the following description to facilitate a full understanding of the application. The application can, however, be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the substance of the application; the application is therefore not limited to the specific implementations disclosed below.
The application provides a road object identification method and a road object identification device; embodiments of the application are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, which is a flowchart of an embodiment of the road object identification method provided by the application, the road object identification method includes the following steps:
Step S101: According to a preset segment length and a reference line of the road, obtain segmentation position ranges for segmenting the point cloud data of the road.
Step S102: Assign the points of the road's point cloud data that fall within the same segmentation position range to the same point set.
Step S103: For each point set, obtain, according to the position coordinates of the points in the point set and the direction of the road's reference line, a rectangle that covers all points of the point set along the direction of the reference line.
Step S104: For the rectangles corresponding to two adjacent segmentation position ranges, judge whether one rectangle and the other have a connection relation.
Step S105: Mark rectangles that have a connection relation as belonging to the same road object.
Steps S101 to S105 are described in detail below.
First, step S101, obtaining, according to a preset segment length and a reference line of the road, the segmentation position ranges for segmenting the point cloud data of the road, is described.
The reference line of the road includes, but is not limited to, the trajectory line of the road generated when the point cloud data of the road was acquired, or a lane line of the road itself. The point cloud data of the road can refer to a set of points acquired by scanning the road with existing technology, each point carrying attribute information such as position and reflectance value; such a set of points usually corresponds to a road object (for example a text marking or a graphic marking on the road).
In step S101, suppose for example that a road is an ideally straight one-way road comprising two adjacent parallel lane lines (denoted R1 and R2), that the reference line of the road is a lane line of the road itself, that the road has length L, and that the preset segment length is L/10. Then, taking one endpoint of lane line R1 as the start point and the endpoint at the other end of R1 as the end point, a waypoint is taken on this line segment every length L/10 from the start point to the end point (excluding the start point and end point themselves). With each waypoint as a reference point, a perpendicular is drawn to lane line R2, intersecting R2. The line segment connecting each waypoint with the intersection of the corresponding perpendicular and lane line R2 is then used as a cut line for segmenting the point cloud data of this straight one-way road, so as to obtain the adjacent segmentation position ranges.
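The worked example above can be sketched in code. The following Python fragment is only an illustrative sketch of the described segmentation of an ideally straight one-way road, not the claimed method itself; the function name straight_road_cut_lines, the parameter n_segments and the example coordinates are assumptions introduced for the illustration, and the two lane lines are assumed to be straight, parallel and given in a planar coordinate system.

import numpy as np

def straight_road_cut_lines(r1_start, r1_end, r2_start, r2_end, n_segments=10):
    """For a straight one-way road bounded by parallel lane lines R1 and R2,
    take a waypoint on R1 every 1/n_segments of its length (excluding the
    endpoints) and drop a perpendicular from each waypoint onto R2; each
    perpendicular is a cut line between two adjacent segmentation ranges."""
    r1_start, r1_end = np.asarray(r1_start, float), np.asarray(r1_end, float)
    r2_start, r2_end = np.asarray(r2_start, float), np.asarray(r2_end, float)
    d1 = r1_end - r1_start                    # direction of lane line R1
    d2 = r2_end - r2_start                    # direction of lane line R2
    cut_lines = []
    for i in range(1, n_segments):            # interior waypoints only
        w = r1_start + d1 * (i / n_segments)  # waypoint on R1
        t = np.dot(w - r2_start, d2) / np.dot(d2, d2)
        foot = r2_start + t * d2              # foot of the perpendicular on R2
        cut_lines.append((w, foot))           # cut line joining R1 and R2
    return cut_lines

# Example: a 100 m straight road with lane lines 3.5 m apart, split into 10 ranges.
cuts = straight_road_cut_lines((0, 0), (100, 0), (0, 3.5), (100, 3.5))
print(len(cuts), "cut lines; first cut line:", cuts[0])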
According to one embodiment of the application, referring to Fig. 2 and Fig. 3, the reference line of the road is the trajectory line of the road generated when the point cloud data of the road was acquired; step S101 then includes:
Step S201: According to the generation time sequence of the trajectory points contained in the trajectory line of the road, sequentially obtain target trajectory points for segmenting the point cloud data of the road, the distance between two consecutive target trajectory points being equal to the preset segment length.
Specifically, the trajectory line of the road includes, but is not limited to, the line along which the acquisition instrument for the road point cloud data moves while acquiring that data. The acquisition instrument usually acquires trajectory points one after another in chronological order along the extension direction of the road, for example one trajectory point every five seconds or every 50 meters. The acquired trajectory points can be used directly as target trajectory points according to their generation times, or one trajectory point can be taken as a target trajectory point every predetermined number of trajectory points.
Taking Fig. 3 as an example, a trajectory point is acquired every 50 meters and used as a target trajectory point, for example trajectory points A, B, C, ... are taken as target trajectory points, where the distance between the two adjacent target trajectory points A and B, and between B and C, is equal to the preset segment length.
Step S202: Determine two consecutive target trajectory points as the endpoints of one segmentation position range. For example, target trajectory points A and B shown in Fig. 3 are determined as the endpoints of one segmentation position range (labelled range 1), and target trajectory points B and C as the endpoints of another segmentation position range (labelled range 2).
Step S203: Using the coinciding endpoint of two adjacent segmentation position ranges as the foot, draw a perpendicular to the trajectory segment formed by the two adjacent segmentation position ranges. For example, with the endpoint B shared by range 1 and range 2 above as the foot, a perpendicular is drawn to the trajectory segment (AC) formed by the two segmentation position ranges.
Step S204: Determine the perpendicular as the shared boundary of the segmentation position ranges. For example, the perpendicular shown in Fig. 3 is taken as the shared boundary of segmentation position range 1 and range 2.
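Steps S201 to S204 can be illustrated with the following Python sketch, which picks target trajectory points roughly every preset segment length along the trajectory polyline and, for every endpoint shared by two adjacent ranges, returns the direction of the perpendicular used as the shared boundary. The function names are assumptions for the illustration, and the trajectory is assumed to be a 2D polyline already ordered by acquisition time.

import numpy as np

def target_trajectory_points(track_pts, section_length):
    """Walk along the trajectory in acquisition order and keep a target
    trajectory point roughly every `section_length` metres of travelled
    distance (step S201)."""
    track_pts = np.asarray(track_pts, float)
    targets = [track_pts[0]]
    travelled = 0.0
    for prev, cur in zip(track_pts[:-1], track_pts[1:]):
        travelled += np.linalg.norm(cur - prev)
        if travelled >= section_length:
            targets.append(cur)
            travelled = 0.0
    return np.asarray(targets)

def shared_boundaries(targets):
    """For every endpoint B shared by two adjacent ranges A-B and B-C, return
    (B, n), where n is perpendicular to the trajectory segment A-C; the shared
    boundary is the line through B with direction n (steps S202 to S204)."""
    boundaries = []
    for a, b, c in zip(targets[:-2], targets[1:-1], targets[2:]):
        chord = c - a                              # trajectory segment A-C
        n = np.array([-chord[1], chord[0]])        # rotate 90 degrees
        boundaries.append((b, n / np.linalg.norm(n)))
    return boundaries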
According to another embodiment of the application, referring to Fig. 4 and Fig. 5, the reference line of the road is a lane line of the road; step S101 then includes:
Step S301: Along the direction of a lane line of the road, sequentially obtain, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, the distance between two consecutive target shape points being equal to the preset segment length.
Specifically, the lane line of the road can be a straight line or a curve, and the shape points of the lane line include, for example, points taken on the lane line at preset distance intervals. The target shape points can be taken from the sequence of shape points every predetermined number of shape points. For example, starting from the first shape point of the upper lane line shown in Fig. 5, with the shape points numbered 1, 2, ..., N, the next shape point is taken as a target shape point every 3 shape points, so that the first target shape point is shape point 1, the second target shape point is shape point 5, the third target shape point is shape point 9, and so on.
Step S302: With each target shape point as the foot, draw a normal to the other lane line of the road, and determine the intersection of the normal with the other lane line as the associated shape point of the target shape point.
Here, the normal drawn with each target shape point as the foot is a line perpendicular to the lane line on which the target shape point lies.
Step S303: Determine two consecutive target shape points and their associated shape points as the endpoints of one segmentation position range. For example, as shown in Fig. 5, target shape points q1 and q2 and their associated shape points q4 and q5 are determined as the endpoints of one segmentation position range, and target shape points q2 and q3 and their associated shape points q5 and q6 as the endpoints of another segmentation position range.
Step S304: Determine the line segment formed by the coinciding endpoints of two adjacent segmentation position ranges as the shared boundary of those segmentation position ranges. For example, the line segment formed by the endpoints q2 and q5 shared by the two adjacent segmentation position ranges shown in Fig. 5 is determined as the shared boundary of these two segmentation position ranges.
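A minimal Python sketch of steps S301 to S304 follows, under the assumption that both lane lines are 2D polylines of shape points: for each target shape point, the normal through that point is intersected with the segments of the other lane line, and the nearest crossing is kept as the associated shape point. The function name, the simple tangent estimate and the nearest-crossing choice are assumptions made for the illustration, not the patent's own implementation.

import numpy as np

def associated_points(target_pts, other_lane_pts):
    """For each target shape point on one lane line, intersect the normal at
    that point with the other lane line and return the intersection as the
    associated shape point (step S302)."""
    target_pts = np.asarray(target_pts, float)
    other = np.asarray(other_lane_pts, float)
    assoc = []
    for i, p in enumerate(target_pts):
        # tangent of the lane line at the target point (forward difference,
        # backward difference at the last point)
        if i + 1 < len(target_pts):
            tangent = target_pts[i + 1] - p
        else:
            tangent = p - target_pts[i - 1]
        crossings = []
        for a, b in zip(other[:-1], other[1:]):
            d = b - a
            denom = np.dot(d, tangent)
            if abs(denom) < 1e-12:
                continue
            # point q = a + t*d on the segment with (q - p) perpendicular to the tangent
            t = np.dot(p - a, tangent) / denom
            if 0.0 <= t <= 1.0:
                crossings.append(a + t * d)
        assoc.append(min(crossings, key=lambda q: np.linalg.norm(q - p))
                     if crossings else None)
    return assoc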
Next, step S102, assigning the points of the road's point cloud data that fall within the same segmentation position range to the same point set, is described. Taking Fig. 3 as an example, the points of the point cloud data that fall within range 1 are assigned to one point set, and the points that fall within range 2 are assigned to another point set.
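Step S102 amounts to binning the cloud points by the shared boundaries obtained in step S101. A possible Python sketch, assuming 2D point coordinates and boundaries given as an anchor point plus the along-road direction at that boundary (all names are assumptions for the illustration):

import numpy as np

def assign_points_to_ranges(cloud_xy, boundary_points, forward_dirs):
    """Assign every point of the road point cloud to a segmentation position
    range: a point belongs to range k if it lies past exactly k shared
    boundaries in the along-road direction."""
    cloud_xy = np.asarray(cloud_xy, float)
    idx = np.zeros(len(cloud_xy), dtype=int)
    for b, u in zip(boundary_points, forward_dirs):
        u = np.asarray(u, float) / np.linalg.norm(u)
        # points on the forward side of this boundary belong to a later range
        idx += ((cloud_xy - np.asarray(b, float)) @ u > 0).astype(int)
    return idx  # idx[i] is the index of the point set that cloud point i joins

# point_sets = [cloud_xy[idx == k] for k in range(idx.max() + 1)]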
Next, step S103: for each point set, obtaining, according to the position coordinates of the points in the point set and the direction of the road's reference line, a rectangle that covers all points of the point set along the direction of the reference line, is described.
Taking Fig. 6 as an example, the reference line of the road is, for example, the trajectory line of the road generated when the point cloud data of the road was acquired. Based on the above steps S101 and S102, one point set is obtained in range 1 and another point set in range 2. Further, according to the position distribution of the points in these two point sets and the extension direction of the road's trajectory line, rectangle 1 and rectangle 2 are obtained, each covering all points of one of the two point sets along the direction of the trajectory line.
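The rectangle of step S103 can be obtained by expressing each point in a frame aligned with the reference-line direction and taking the extents there. The Python sketch below shows one straightforward way to do this in 2D; the function name covering_rectangle is an assumption for the illustration.

import numpy as np

def covering_rectangle(points_xy, ref_dir):
    """Rectangle whose long sides are parallel to the reference-line direction
    and which covers every point of the point set; returns the four corners."""
    pts = np.asarray(points_xy, float)
    u = np.asarray(ref_dir, float)
    u = u / np.linalg.norm(u)            # along-road unit vector
    v = np.array([-u[1], u[0]])          # across-road unit vector
    along, across = pts @ u, pts @ v
    lo_a, hi_a = along.min(), along.max()
    lo_c, hi_c = across.min(), across.max()
    return np.asarray([a * u + c * v for a, c in
                       [(lo_a, lo_c), (hi_a, lo_c), (hi_a, hi_c), (lo_a, hi_c)]])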
Next, step S104: for the rectangles corresponding to two adjacent segmentation position ranges, judging whether one rectangle and the other have a connection relation, is described.
According to one embodiment of the application, referring to Fig. 7, step S104 includes:
Step S401: For the rectangles corresponding to two adjacent segmentation position ranges, obtain the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance. Taking rectangle 1 and rectangle 2 of Fig. 6, which correspond to segmentation position ranges 1 and 2, as an example, the distances from the vertices of rectangle 1 and of rectangle 2 to the shared boundary BE of the two ranges are obtained as target distances.
Step S402: From the target distances corresponding to each of the two segmentation position ranges, obtain a shortest distance respectively. For example, the shortest distances from rectangle 1 and from rectangle 2 to the shared boundary BE in Fig. 6 are obtained respectively; assuming that one side of rectangle 1 and one side of rectangle 2 each lie on the same straight line as the shared boundary BE, the shortest distances from rectangle 1 and from rectangle 2 to the shared boundary are both 0.
Step S403: Judge whether both shortest distances are smaller than a preset distance threshold; if so, the two rectangles corresponding to the two shortest distances have a connection relation.
Specifically, assume that the distance threshold is some positive value Y. Since the two shortest distances in Fig. 6 are both 0 and therefore both smaller than Y, rectangle 1 and rectangle 2 have a connection relation.
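Steps S401 to S403 reduce to point-to-line distances and a threshold test. A possible Python sketch, assuming the shared boundary is given by a point on it and its direction, and that each rectangle is a list of four 2D vertices; the names and the signature are assumptions for the illustration.

import numpy as np

def vertex_boundary_distance(vertex, boundary_point, boundary_normal):
    """Distance from a rectangle vertex to the shared boundary line (step S401)."""
    return abs(np.dot(np.asarray(vertex, float) - np.asarray(boundary_point, float),
                      boundary_normal))

def are_connected(rect1, rect2, boundary_point, boundary_dir, dist_threshold):
    """Two rectangles from adjacent ranges have a connection relation if, for
    each rectangle, the closest vertex lies within `dist_threshold` of the
    shared boundary (steps S402 and S403)."""
    d = np.asarray(boundary_dir, float)
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the boundary
    min1 = min(vertex_boundary_distance(v, boundary_point, n) for v in rect1)
    min2 = min(vertex_boundary_distance(v, boundary_point, n) for v in rect2)
    return min1 < dist_threshold and min2 < dist_threshold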
Finally, step S105, marking rectangles that have a connection relation as belonging to the same road object, is described. Taking Fig. 6 as an example, since rectangle 1 and rectangle 2 have a connection relation, the two rectangles are marked as belonging to the same road object.
According to one embodiment of the application, with continued reference to Fig. 1, the method further includes:
Step S106: Merge the rectangles belonging to the same road object to obtain one target rectangle corresponding to that road object, the target rectangle covering all rectangles corresponding to the same road object.
Specifically, referring to Fig. 8, by merging all rectangles belonging to the same road object, the target rectangle corresponding to that road object is obtained; the target rectangle covers all rectangles corresponding to the same road object, that is, it covers the point cloud corresponding to the complete road object.
According to one embodiment of the application, the rectangles to be merged can be merged pairwise until they are finally merged into one target rectangle. If the two rectangles to be merged have different widths, refer to Fig. 9, which shows two rectangles of different widths; merging two connected rectangles that belong to the same road object then includes:
obtaining the line equations of the long sides of the wider of the two rectangles to be merged, the long sides of a rectangle being the two sides parallel to the direction of the road's reference line, these two equations being denoted the first line equation and the second line equation;
obtaining the line equation of one short side of the narrower of the two rectangles to be merged, denoted the third line equation, that short side being a side perpendicular to the direction of the road's reference line and the one farther from the shared boundary of the segmentation position ranges;
as mentioned above, the segmentation position ranges in which rectangles with a connection relation lie have a shared boundary;
obtaining the intersection of the third line equation with the first line equation, and the intersection of the third line equation with the second line equation;
taking these two intersections, together with the two vertices of the wider rectangle that are farther from the two intersections, as the four vertices of the merged rectangle, thereby merging the rectangles that have a connection relation. As shown in Fig. 9, A, B, C and D are the vertices of the merged rectangle.
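The merge of Fig. 9 can be written down directly from the three line equations. The Python sketch below assumes 2D coordinates, that the wider rectangle's vertices are supplied in the order stated in the comment, and that the narrower rectangle's far broadside is given by its two endpoints; all names are assumptions introduced for the illustration.

import numpy as np

def line_from_points(p, q):
    """Coefficients (a, b, c) with a*x + b*y + c = 0 for the line through p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, -(a * p[0] + b * p[1])

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) coefficient triples."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return np.array([(b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det])

def merge_rectangles(wide_rect, narrow_far_edge):
    """Merge a wider rectangle with a connected narrower one: intersect the far
    broadside of the narrow rectangle (third line) with the two long sides of
    the wide rectangle (first and second lines); these two intersections plus
    the two far vertices of the wide rectangle are the merged rectangle.
    `wide_rect` is ordered (near-left, near-right, far-right, far-left), where
    near means the end adjacent to the shared boundary."""
    nl, nr, fr, fl = [np.asarray(v, float) for v in wide_rect]
    long1 = line_from_points(nl, fl)           # first long-side line
    long2 = line_from_points(nr, fr)           # second long-side line
    far = line_from_points(*narrow_far_edge)   # third line: far broadside
    p1, p2 = intersect(far, long1), intersect(far, long2)
    return np.asarray([p1, p2, fr, fl])        # vertices of the merged rectangle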
Compared with the prior art, the application segments the point cloud on the basis of the road reference line, which effectively avoids losing data at the segmentation positions and prevents the computation result from being distorted. Further, by obtaining a rectangle covering all points in each segment and then determining, from the positional relationship between the rectangles, which rectangles belong to the same road object, the application effectively identifies the complete road object corresponding to all points covered by those rectangles and avoids the drop in recognition rate caused by segmentation. In addition, the road object identification method provided by the application can run automatically without manual intervention, which reduces labour cost and offers strong practicality.
The above embodiments provide a road object identification method; correspondingly, the application also provides a road object identification device. Referring to Fig. 10, which is a schematic diagram of an embodiment of the road object identification device provided by the application: since the flow executed by the device embodiment is essentially similar to that of the method embodiment, the device embodiment is described only schematically, and for relevant details reference can be made to the description of the method embodiment.
The road object identification device provided in this embodiment includes:
a segmentation position range acquiring unit 101, configured to obtain, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting the point cloud data of the road;
an allocation unit 102, configured to assign the points of the road's point cloud data that fall within the same segmentation position range to the same point set;
a rectangle acquiring unit 103, configured to obtain, for each point set and according to the position coordinates of the points in the point set and the direction of the road's reference line, a rectangle that covers all points of the point set along the direction of the reference line;
a judging unit 104, configured to judge, for the rectangles corresponding to two adjacent segmentation position ranges, whether one rectangle and the other have a connection relation;
a marking unit 105, configured to mark rectangles that have a connection relation as belonging to the same road object.
According to one embodiment of the application, referring to Fig. 10, the device further includes:
a combining unit 106, configured to merge the rectangles belonging to the same road object to obtain a target rectangle corresponding to that road object, the target rectangle covering all rectangles corresponding to the same road object.
According to one embodiment of the application, the judging unit 104 is specifically configured to:
for the rectangles corresponding to two adjacent segmentation position ranges, obtain the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance;
from the target distances corresponding to each of the two segmentation position ranges, obtain a shortest distance respectively;
judge whether both shortest distances are smaller than a preset distance threshold, and if so, the two rectangles corresponding to the two shortest distances have a connection relation.
According to one embodiment of the application, when the reference line of the road is the trajectory line of the road generated when the point cloud data of the road was acquired, the segmentation position range acquiring unit 101 is specifically configured to:
according to the generation time sequence of the trajectory points contained in the trajectory line of the road, sequentially obtain target trajectory points for segmenting the point cloud data of the road, the distance between two consecutive target trajectory points being equal to the preset segment length;
determine two consecutive target trajectory points as the endpoints of one segmentation position range;
using the coinciding endpoint of two adjacent segmentation position ranges as the foot, draw a perpendicular to the trajectory segment formed by the two adjacent segmentation position ranges;
determine the perpendicular as the shared boundary of the segmentation position ranges.
According to one embodiment of the application, when the reference line of the road is a lane line of the road, the segmentation position range acquiring unit 101 is specifically configured to:
along the direction of a lane line of the road, sequentially obtain, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, the distance between two consecutive target shape points being equal to the preset segment length;
with each target shape point as the foot, draw a normal to the other lane line of the road, and determine the intersection of the normal with the other lane line as the associated shape point of the target shape point;
determine two consecutive target shape points and their associated shape points as the endpoints of one segmentation position range;
determine the line segment formed by the coinciding endpoints of two adjacent segmentation position ranges as the shared boundary of the segmentation position ranges.
Although the application is disclosed above by way of preferred embodiments, they are not intended to limit the application. Any person skilled in the art can make possible variations and modifications without departing from the spirit and scope of the application, and the scope of protection of the application shall therefore be that defined by the claims of the application.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include computer-readable media in the form of volatile memory, random-access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, which can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Those skilled in the art will understand that embodiments of the application can be provided as a method, a system or a computer program product. Therefore, the application can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.

Claims (10)

1. A road object identification method, characterized by comprising:
obtaining, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting point cloud data of the road;
assigning the points of the point cloud data of the road that fall within the same segmentation position range to the same point set;
for each point set, obtaining, according to the position coordinates of the points in the point set and the direction of the reference line of the road, a rectangle that covers all points of the point set along the direction of the reference line;
for the rectangles corresponding to two adjacent segmentation position ranges, judging whether one of the rectangles and the other rectangle have a connection relation;
marking rectangles that have a connection relation as belonging to the same road object.
2. The road object identification method according to claim 1, characterized in that the method further comprises:
merging the rectangles belonging to the same road object to obtain a target rectangle corresponding to the road object, the target rectangle covering all rectangles corresponding to the same road object.
3. The road object identification method according to claim 1 or 2, characterized in that judging, for the rectangles corresponding to two adjacent segmentation position ranges, whether one of the rectangles and the other rectangle have a connection relation comprises:
for the rectangles corresponding to the two adjacent segmentation position ranges, obtaining the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance;
from the target distances corresponding to each of the two segmentation position ranges, obtaining a shortest distance respectively;
judging whether both shortest distances are smaller than a preset distance threshold, and if so, the two rectangles corresponding to the two shortest distances have a connection relation.
4. The road object identification method according to claim 3, characterized in that the reference line of the road is a trajectory line of the road generated when the point cloud data of the road was acquired, and obtaining, according to the preset segment length and the reference line of the road, the segmentation position ranges for segmenting the point cloud data of the road specifically comprises:
according to the generation time sequence of the trajectory points contained in the trajectory line of the road, sequentially obtaining target trajectory points for segmenting the point cloud data of the road, the distance between two consecutive target trajectory points being equal to the preset segment length;
determining two consecutive target trajectory points as the endpoints of one segmentation position range;
using the coinciding endpoint of two adjacent segmentation position ranges as the foot, drawing a perpendicular to the trajectory segment formed by the two adjacent segmentation position ranges;
determining the perpendicular as the shared boundary of the segmentation position ranges.
5. The road object identification method according to claim 3, characterized in that the reference line of the road is a lane line of the road, and obtaining, according to the preset segment length and the reference line of the road, the segmentation position ranges for segmenting the point cloud data of the road specifically comprises:
along the direction of the lane line of the road, sequentially obtaining, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, the distance between two consecutive target shape points being equal to the preset segment length;
with each target shape point as the foot, drawing a normal to the other lane line of the road, and determining the intersection of the normal with the other lane line as the associated shape point of the target shape point;
determining two consecutive target shape points and their associated shape points as the endpoints of one segmentation position range;
determining the line segment formed by the coinciding endpoints of two adjacent segmentation position ranges as the shared boundary of the segmentation position ranges.
6. A road object identification device, characterized by comprising:
a segmentation position range acquiring unit, configured to obtain, according to a preset segment length and a reference line of a road, segmentation position ranges for segmenting point cloud data of the road;
an allocation unit, configured to assign the points of the point cloud data of the road that fall within the same segmentation position range to the same point set;
a rectangle acquiring unit, configured to obtain, for each point set and according to the position coordinates of the points in the point set and the direction of the reference line of the road, a rectangle that covers all points of the point set along the direction of the reference line;
a judging unit, configured to judge, for the rectangles corresponding to two adjacent segmentation position ranges, whether one of the rectangles and the other rectangle have a connection relation;
a marking unit, configured to mark rectangles that have a connection relation as belonging to the same road object.
7. The road object identification device according to claim 6, characterized in that the device further comprises:
a combining unit, configured to merge the rectangles belonging to the same road object to obtain a target rectangle corresponding to the road object, the target rectangle covering all rectangles corresponding to the same road object.
8. The road object identification device according to claim 6 or 7, characterized in that the judging unit is specifically configured to:
for the rectangles corresponding to two adjacent segmentation position ranges, obtain the distance from each vertex of each rectangle to the shared boundary of the two adjacent segmentation position ranges as a target distance;
from the target distances corresponding to each of the two segmentation position ranges, obtain a shortest distance respectively;
judge whether both shortest distances are smaller than a preset distance threshold, and if so, the two rectangles corresponding to the two shortest distances have a connection relation.
9. The road object identification device according to claim 8, characterized in that the reference line of the road is a trajectory line of the road generated when the point cloud data of the road was acquired, and the segmentation position range acquiring unit is specifically configured to:
according to the generation time sequence of the trajectory points contained in the trajectory line of the road, sequentially obtain target trajectory points for segmenting the point cloud data of the road, the distance between two consecutive target trajectory points being equal to the preset segment length;
determine two consecutive target trajectory points as the endpoints of one segmentation position range;
using the coinciding endpoint of two adjacent segmentation position ranges as the foot, draw a perpendicular to the trajectory segment formed by the two adjacent segmentation position ranges;
determine the perpendicular as the shared boundary of the segmentation position ranges.
10. The road object identification device according to claim 8, characterized in that the reference line of the road is a lane line of the road, and the segmentation position range acquiring unit is specifically configured to:
along the direction of the lane line of the road, sequentially obtain, from the shape points of the lane line, target shape points for segmenting the point cloud data of the road, the distance between two consecutive target shape points being equal to the preset segment length;
with each target shape point as the foot, draw a normal to the other lane line of the road, and determine the intersection of the normal with the other lane line as the associated shape point of the target shape point;
determine two consecutive target shape points and their associated shape points as the endpoints of one segmentation position range;
determine the line segment formed by the coinciding endpoints of two adjacent segmentation position ranges as the shared boundary of the segmentation position ranges.
CN201710154821.4A 2017-03-15 2017-03-15 Road object identification method and device Active CN108629228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710154821.4A CN108629228B (en) 2017-03-15 2017-03-15 Road object identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710154821.4A CN108629228B (en) 2017-03-15 2017-03-15 Road object identification method and device

Publications (2)

Publication Number Publication Date
CN108629228A (en) 2018-10-09
CN108629228B CN108629228B (en) 2020-12-01

Family

ID=63686848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710154821.4A Active CN108629228B (en) 2017-03-15 2017-03-15 Road object identification method and device

Country Status (1)

Country Link
CN (1) CN108629228B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592114A (en) * 2011-12-26 2012-07-18 河南工业大学 Method for extracting and recognizing lane line features of complex road conditions
CN104677363A (en) * 2013-12-03 2015-06-03 北京图盟科技有限公司 Road generating method and road generating device
CN103743383A (en) * 2014-02-11 2014-04-23 天津市星际空间地理信息工程有限公司 Automatic extraction method for road information based on point cloud
CN104008387A (en) * 2014-05-19 2014-08-27 山东科技大学 Lane line detection method based on feature point piecewise linear fitting
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
CN104809449A (en) * 2015-05-14 2015-07-29 重庆大学 Method for automatically detecting lane dotted boundary line of expressway video monitoring system
CN105136153A (en) * 2015-09-11 2015-12-09 江苏大学 Collection device and collection method of exact position of lane line

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YONGTAO YU.ET.: "Learning Hierarchical Features for Automated Extraction of Road Markings From 3-D Mobile LiDAR Point Clouds", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 *
陈飞: "基于机载LiDAR点云的道路提取方法研究", 《中国优秀硕士学位论文全文数据库 基础科学辑》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11474247B2 (en) 2018-11-13 2022-10-18 Beijing Didi Infinity Technology And Development Co., Ltd. Methods and systems for color point cloud generation
CN109633676A (en) * 2018-11-22 2019-04-16 浙江中车电车有限公司 A kind of method and system based on the laser radar obstruction detection direction of motion
CN109633685A (en) * 2018-11-22 2019-04-16 浙江中车电车有限公司 A kind of method and system based on laser radar obstruction detection state
CN110008921A (en) * 2019-04-12 2019-07-12 北京百度网讯科技有限公司 A kind of generation method of road boundary, device, electronic equipment and storage medium
CN110008921B (en) * 2019-04-12 2021-12-28 北京百度网讯科技有限公司 Road boundary generation method and device, electronic equipment and storage medium
CN110287904A (en) * 2019-06-27 2019-09-27 武汉中海庭数据技术有限公司 A kind of lane line extracting method, device and storage medium based on crowdsourcing data
CN110471086A (en) * 2019-09-06 2019-11-19 北京云迹科技有限公司 A kind of radar survey barrier system and method
CN110471086B (en) * 2019-09-06 2021-12-03 北京云迹科技有限公司 Radar fault detection system and method
CN113554044A (en) * 2020-04-23 2021-10-26 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for acquiring walking road width
CN113554044B (en) * 2020-04-23 2023-08-08 百度在线网络技术(北京)有限公司 Walking road width acquisition method, device, equipment and storage medium
CN111337036B (en) * 2020-05-19 2020-08-25 北京数字绿土科技有限公司 Overlap region data optimization method and device and terminal equipment
CN111337036A (en) * 2020-05-19 2020-06-26 北京数字绿土科技有限公司 Overlap region data optimization method and device and terminal equipment
CN112528892A (en) * 2020-12-17 2021-03-19 武汉中海庭数据技术有限公司 Unmanned aerial vehicle point cloud lane line extraction method and system
CN115512601A (en) * 2022-11-15 2022-12-23 武汉智图科技有限责任公司 Automatic splicing method and device for geographic information non-connection linear elements
CN115512601B (en) * 2022-11-15 2023-02-28 武汉智图科技有限责任公司 Automatic splicing method and device for geographic information non-connection linear elements
CN117036541A (en) * 2023-09-18 2023-11-10 腾讯科技(深圳)有限公司 Lane center line generation method, lane center line generation device, electronic equipment and storage medium
CN117036541B (en) * 2023-09-18 2024-01-12 腾讯科技(深圳)有限公司 Lane center line generation method, lane center line generation device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108629228B (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN108629228A (en) A kind of road object identifying method and device
JP2019512668A (en) Root deviation recognition method, terminal, and storage medium
CN109839119B (en) Method and device for acquiring bridge floor area of bridge of cross-road bridge
CN108955670A (en) Information acquisition method and device
CN106845324B (en) Method and device for processing guideboard information
CN112154446B (en) Stereo lane line determining method and device and electronic equipment
CN113570665B (en) Road edge extraction method and device and electronic equipment
CN105869211B (en) A kind of recallable amounts method and device
JP2017156179A (en) Facility state detecting method and device setting method
CN108562885B (en) High-voltage transmission line airborne LiDAR point cloud extraction method
CN112116549A (en) Method and device for evaluating point cloud map precision
JP2020042793A (en) Obstacle distribution simulation method, device, and terminal based on probability plot
CN115423968B (en) Power transmission channel optimization method based on point cloud data and live-action three-dimensional model
CN112199453A (en) Traffic hot spot clustering method, device, equipment and computer storage medium
CN115861408A (en) Unmanned aerial vehicle road surface pit inspection method based on laser point tracking and application thereof
CN115139303A (en) Grid well lid detection method, device, equipment and storage medium
CN112733782B (en) Urban functional area identification method based on road network, storage medium and electronic equipment
Carneiro et al. Digital urban morphometrics: automatic extraction and assessment of morphological properties of buildings
US8483478B1 (en) Grammar-based, cueing method of object recognition, and a system for performing same
CN111583406A (en) Pole tower foot base point coordinate calculation method and device and terminal equipment
Słota Full-waveform data for building roof step edge localization
JP2006286019A (en) Automatic generation apparatus and automatic generation method of three-dimensional structure shape, program therefor, and recording medium recording the program
CN118154671A (en) Method and device for determining laser radar installation position and electronic equipment
Tyagur et al. Digital terrain models from mobile laser scanning data in Moravian Karst
CN102706326A (en) Processing method of light beam method aerial triangulation file data

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 20200430
    Address after: 310052, Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province
    Applicant after: Alibaba (China) Co.,Ltd.
    Address before: 102200, No. 18 Changsheng Road, Science and Technology Park, Changping District, Beijing, China, 1-5
    Applicant before: AUTONAVI SOFTWARE Co.,Ltd.
GR01: Patent grant
TR01: Transfer of patent right
    Effective date of registration: 20230506
    Address after: 102299, Floor 1-5, Block B1, No. 18 Changsheng Road, Science and Technology Park, Changping District, Beijing
    Patentee after: AUTONAVI SOFTWARE Co.,Ltd.
    Address before: 310052, Room 508, 5th Floor, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province
    Patentee before: Alibaba (China) Co.,Ltd.