CN115902815A - Lane line recognition method, terminal device, and computer-readable storage medium - Google Patents

Lane line recognition method, terminal device, and computer-readable storage medium

Info

Publication number
CN115902815A
Authority
CN
China
Prior art keywords
point cloud
reflectivity
segment
target analysis
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111575632.7A
Other languages
Chinese (zh)
Inventor
皮兴俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suteng Innovation Technology Co Ltd
Original Assignee
Suteng Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suteng Innovation Technology Co Ltd filed Critical Suteng Innovation Technology Co Ltd
Priority to CN202111575632.7A priority Critical patent/CN115902815A/en
Publication of CN115902815A publication Critical patent/CN115902815A/en
Pending legal-status Critical Current

Abstract

The application belongs to the technical field of radar and provides a lane line identification method, a terminal device, and a computer-readable storage medium. The method comprises the following steps: determining a target analysis point cloud line segment according to the current point cloud; segmenting the target analysis point cloud line segment according to a preset segmentation rule; and identifying whether the current point cloud is a lane line point cloud according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data, and the distance characteristic value of the point clouds of the vertical channels adjacent to the target analysis line segment. By determining the target analysis point cloud line segment from the current point cloud, computing the distance and reflectivity characteristic values segment by segment, and identifying the lane line point cloud from these characteristic values, the identification accuracy of lane line point clouds can be effectively improved.

Description

Lane line recognition method, terminal device, and computer-readable storage medium
Technical Field
The present application belongs to the field of radar technology, and in particular, to a lane line identification method, a terminal device, and a computer-readable storage medium.
Background
Laser radar (lidar) is widely used in automatic driving, logistics vehicles, robotics, public intelligent transportation, and similar fields owing to its high resolution, high sensitivity, strong anti-interference capability, and independence from ambient light conditions.
In the field of automatic driving, when a vehicle travels on a road, the ground lane lines must be recognized to distinguish lanes so that the vehicle's route can be planned in advance. Because a layer of highly reflective material is usually laid on the lane line, lane lines are typically identified from the reflectivity contained in the point cloud data scanned by the laser radar. However, after the road surface has been in service for a long time and has been worn and repeatedly repaired, this identification becomes inaccurate.
Disclosure of Invention
The embodiments of the present application provide a lane line identification method, a terminal device, and a computer-readable storage medium to solve the problem that lane line identification based on reflectivity alone becomes inaccurate when the road surface and lane lines are worn and the reflectivity difference between them is small.
In a first aspect, an embodiment of the present application provides a lane line identification method, including:
determining a target analysis point cloud line segment according to the current point cloud;
segmenting the target analysis point cloud line segment according to a preset segmentation rule;
calculating point cloud distance characteristic values of the target analysis point cloud line segments according to the distance values of the target analysis point cloud line segments, calculating point cloud distance characteristic values of each segment of point cloud data according to the distance values of each segment, and calculating point cloud reflectivity characteristic values of each segment of point cloud data according to the reflectivity values of each segment;
calculating a distance characteristic value of point clouds of adjacent vertical channels of the target analysis line segment;
and identifying whether the current point cloud is the lane line point cloud or not according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
In one implementation manner of the first aspect, the determining a target analysis point cloud segment according to a current point cloud includes:
acquiring the width of a lane line and the angle of the lane line relative to the center of the laser radar to determine the point cloud number of the lane line;
and determining a target analysis point cloud line segment according to the current point cloud, the number of the lane line point clouds and a preset multiple.
In one implementation manner of the first aspect, the identifying whether the current point cloud is a lane line point cloud according to a point cloud distance characteristic value of a target analysis point cloud line segment, a point cloud distance characteristic value of each segment of point cloud data, a point cloud reflectivity characteristic value of each segment of point cloud data, and a distance characteristic value of a point cloud of an adjacent vertical channel with the target analysis line segment includes:
identifying conditions are set according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment;
and if the identification condition is met, determining that the current point cloud is the lane line point cloud, otherwise, determining that the current point cloud is not the lane line point cloud.
In one implementation manner of the first aspect, the identification condition includes:
the distance average value of the target analysis point cloud line segment is greater than a first preset threshold value;
the ratio of the reflectivity mean value of the point cloud of the second segment to that of the first segment, and the ratio of the reflectivity mean value of the point cloud of the second segment to that of the third segment, are both larger than a second preset threshold;
the reflectivity variance of the first segment, the reflectivity variance of the second segment and the reflectivity variance of the third segment are all smaller than a third preset threshold;
the vertical angle of the current point cloud is a negative value;
and the absolute value of the difference value between the distance mean value of the point clouds of the adjacent vertical channels and the distance mean value of the target analysis point cloud line segment is greater than a fourth preset threshold value.
In an implementation manner of the first aspect, after identifying whether the current point cloud is a lane line point cloud according to a point cloud distance characteristic value of a target analysis point cloud line segment, a point cloud distance characteristic value of each segment of point cloud data, a point cloud reflectivity characteristic value of each segment of point cloud data, and a distance characteristic value of a point cloud of an adjacent vertical channel of the target analysis line segment, the method further includes:
and enhancing the reflectivity of the point cloud of the lane line.
In one implementation manner of the first aspect, the enhancing the reflectivity of the point cloud of the lane line includes:
the reflectivity of the lane line point cloud is enhanced based on the enhancement coefficient.
In one implementation manner of the first aspect, after enhancing the reflectivity of the point cloud of lane lines based on the enhancement coefficient, the method further includes:
and if the reflectivity of the enhanced lane line point cloud exceeds the maximum reflectivity limit value, setting the reflectivity of the enhanced lane line point cloud as the maximum reflectivity limit value.
In a second aspect, an embodiment of the present application provides a terminal device, including:
the target determining unit is used for determining a target analysis point cloud line segment according to the current point cloud;
the segmentation unit is used for segmenting the target analysis point cloud line segment according to a preset segmentation rule;
the first calculating unit is used for calculating point cloud distance characteristic values of the target analysis point cloud line segments according to the distance values of the target analysis point cloud line segments, calculating point cloud distance characteristic values of each segment of point cloud data according to the distance values of each segment, and calculating point cloud reflectivity characteristic values of each segment of point cloud data according to each segment reflectivity value;
the second calculation unit is used for calculating a distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment;
and the identification unit is used for identifying whether the current point cloud is the lane line point cloud or not according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the lane line identification method according to the first aspect or any optional manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the lane line identification method according to the first aspect or any optional manner of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the lane line identification method according to the first aspect or any optional manner of the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
the implementation of the lane line identification method, the terminal device, the computer readable storage medium and the computer program product provided by the embodiment of the application has the following beneficial effects:
according to the lane line identification method, the target analysis point cloud line segment is determined through the current point cloud, the distance characteristic value and the reflectivity characteristic value of each segmented point cloud are determined in a segmented mode, whether the current point cloud is the lane line point cloud or not is identified based on the distance characteristic value and the reflectivity characteristic value, and the lane line point cloud identification precision can be effectively improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an operating scenario of a lidar;
fig. 2 is a schematic flowchart of a lane line identification method provided in an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of the lane line identification method provided in the embodiment of the present application;
FIG. 4 is a schematic view of the angle of the lane line relative to the center of the lidar in an embodiment of the application;
FIG. 5 is a schematic segmentation diagram for segmenting a target analysis point cloud line segment according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart illustrating an implementation of another lane line identification method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
It should also be appreciated that reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
A laser radar is an active remote sensing device that uses a laser as its transmitting light source and photoelectric detection for reception. The lidar may include a transmitter, a receiver, a scan control system, and a data processing system, among other components. Its working principle is as follows: the transmitter emits detection laser toward a target object; the target reflects the detection laser to form echo laser; and the receiver receives and processes the echo laser to obtain information such as the distance, size, speed, and reflectivity of the target. The different photoelectric sensors of the lidar emit laser beams at different vertical angles; the echo signals returned after the beams hit an object are received by the photoelectric sensors and converted into electrical signals, and distance measurement is performed after analog-to-digital conversion. The scan control system then rotates the motor of the lidar to the next horizontal angle, and the whole transmitting and receiving procedure is repeated to obtain the distance values and reflectivity values at different horizontal angles of the same vertical angle. In this way three-dimensional distance information of the spatial target is formed, from which subsequent perception processing can determine the shape, size, and type of the target.
For example, referring to fig. 1, fig. 1 shows a block diagram of a lidar. As shown in fig. 1, the lidar may include a control and processing unit, a transmitter, a receiver, a transmitting lens, and a receiving lens. The control and processing unit controls the lidar to work according to a certain transmitting and receiving time sequence and processes the received data to obtain the distance result of a target. The transmitter is usually an array of semiconductor lasers that emits laser pulses in sequence under the drive of the control and processing unit. When a target exists in the transmitting direction, the target reflects the laser and returns an echo, which reaches the receiver through the receiving lens. Specifically, the receiving photoelectric sensor converts the received optical signal (i.e., the echo) into an electrical signal, amplifies it, and performs analog-to-digital conversion to obtain the corresponding digital signal; the distance result and the reflectivity result of the target are obtained after subsequent digital processing.
In the field of automatic driving, when a vehicle travels on a road, the ground lane lines must be recognized to distinguish lanes so that the vehicle's route can be planned in advance. At present, lane lines are generally identified from the reflectivity contained in the point cloud data acquired by lidar scanning. Reflectivity describes the optical reflection capability of an object and takes values between 0 and 255; the smaller the value, the lower the reflectivity of the target object, and the larger the value, the higher the reflectivity. To improve the accuracy of lane line recognition, lane lines are usually painted white or yellow and covered with a layer of highly reflective material, giving a reflectivity between 30 and 60, while the road surface is usually black, with a reflectivity of about 10. After the road surface has been in service through years of wear and maintenance, the reflectivity difference between the road surface and the lane line becomes smaller; for example, the reflectivity of worn ground is about 8, and the reflectivity of a worn lane line is between 13 and 15. In this case, in the point cloud scanned by the lidar, the difference between the reflectivity of the lane line and that of the ground is very small, and the recognition of the lane line is inaccurate.
The lane line identification method provided by the embodiment of the present application will be described in detail below:
referring to fig. 2, fig. 2 is a flowchart illustrating an exemplary implementation of a lane line identification method according to an embodiment of the present disclosure. It should be noted that an execution main body of the lane line identification method provided in the embodiment of the present application may be a laser radar, specifically, a control and processing unit inside the laser radar, or a terminal device in communication connection with the laser radar, where the terminal device may be a mobile terminal such as a smart phone, a tablet computer, or a wearable device, or may be a computer, a cloud server, a radar-assisted computer, or other devices in various application scenarios. It should be noted that. The following description will be given taking the execution subject as a laser radar as an example:
as shown in fig. 2, the lane line identification method provided in the embodiment of the present application may include steps S11 to S15, which are detailed as follows:
s11: and determining a target analysis point cloud line segment according to the current point cloud.
Referring to fig. 3, fig. 3 is a schematic view illustrating an application scenario of the lane line identification method according to the embodiment of the present application. As shown in fig. 3, a laser radar for acquiring point cloud data may be installed on the top of the vehicle, the laser radar may emit laser to the road surface, and after the laser is reflected by the road surface and the lane line, the laser radar may acquire the point cloud data, which includes the point cloud of the road surface and the point cloud of the lane line.
In the embodiment of the application, the point cloud data reflected by the road surface and the lane lines are obtained through the laser radar, each point comprising a distance value and a reflectivity value. Suppose the distance value and reflectivity value of a point are denoted Dis(m, n) and Ref(m, n);
where Dis(m, n) represents the distance value, Ref(m, n) represents the reflectivity, m is the point cloud index over the different vertical angles, with corresponding vertical angle AngV(m), and n is the point cloud index over the different horizontal angles, with corresponding horizontal angle AngH(n).
It should be noted that the vertical angle and the horizontal angle are determined by the lidar scan: when scanning the road surface, the lidar is controlled to scan at a fixed vertical angle while rotating through the horizontal angles. As an alternative example, the lidar may rotate horizontally through 360°, so it can be controlled to rotate by, e.g., 5° each step (this angle value can be set according to the acquisition accuracy requirement; it is only an example and not a limitation), yielding the distance values and reflectivity values at different horizontal angles of the same vertical angle. The lidar is then controlled to adjust the vertical angle and the operation is repeated, yielding the distance values and reflectivity values at different horizontal angles of the different vertical angles.
In the embodiment of the application, the target analysis point cloud line segment is obtained by taking a section of point cloud data along the horizontal angle, centered on the current point cloud.
In a specific application, assuming the distance value of the current point cloud is Dis(m, n) and its reflectivity value is Ref(m, n), the distance values of the target analysis point cloud line segment can be expressed as Dis(m, n-k : n+k) and its reflectivity values as Ref(m, n-k : n+k); that is, the distance values and the reflectivity values each comprise 2k+1 points.
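As an illustration only, this windowing step might be sketched in Python as follows; the array names `dis` and `ref`, the 2-D layout, and the wrap-around at the 360° scan seam are assumptions of this sketch, not details given in the application:

```python
import numpy as np

def target_segment(dis: np.ndarray, ref: np.ndarray, m: int, n: int, k: int):
    """Take the 2k+1 points centered on the current point cloud (m, n).

    dis and ref are assumed to be 2-D arrays of distance and reflectivity
    values, indexed as [vertical channel m, horizontal position n].
    """
    idx = np.arange(n - k, n + k + 1) % dis.shape[1]  # wrap around the 360° scan
    return dis[m, idx], ref[m, idx]
```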
In an embodiment of the application, the step S11 may include the following steps:
obtaining the width of a lane line and the angle of the lane line relative to the center of the laser radar to determine the point cloud number of the lane line;
the obtaining of the lane line width value may be obtaining standard value data of the lane line, or extracting feature point data of the lane line according to the detection data to obtain a possible lane line width value. As an alternative, the center angle of the lane line relative to the laser radar may be determined according to the distance value information of the feature points of the lane line on the left and right sides of the radar in the detection data.
And determining a target analysis point cloud line segment according to the current point cloud, the number of the lane line point clouds and a preset multiple.
In a specific application, the value of k can be determined according to the width of the lane line, a preset multiple and the angle of the lane line relative to the center of the laser radar.
Referring to fig. 4, fig. 4 is a schematic view illustrating the angle of a lane line with respect to the center of the lidar in an embodiment of the present application. As shown in fig. 4, assuming that the width of the lane line is Llane and the distance between the lane line and the center of the lidar is Dis(m, n), the angle thetalane of the lane line relative to the center of the lidar can be expressed as:
thetalane = 2 * atan(Llane / Dis(m, n)).
Based on this, the number of lane line point clouds, denoted PointN, can be determined:
PointN = thetalane / AngR;
where AngR is the horizontal angular resolution of the laser radar, which may specifically take the value 0.2.
To ensure that the target analysis point cloud line segment covers the lane line with a certain margin, the theoretical lane line point cloud number is multiplied by a preset coefficient and then rounded, i.e.:
PointN = round(Mr * thetalane / AngR);
where Mr is a preset coefficient, which may specifically take the value 1.2.
For convenience of calculation, the number of lane line point clouds can be rounded up to an odd number, i.e.:
PointN = 2 * ceil((PointN + 1) / 2) - 1.
and the point cloud number of the target analysis point cloud line segment is a preset multiple of the point cloud number of the lane line, namely:
PointA=N*PointN;
where N is a preset multiple, which may specifically take the value 3; that is, the point cloud number of the target analysis point cloud line segment is 3 times the point cloud number of the lane line.
Based on this, the value of k can be determined as: k = (PointA - 1) / 2. For point clouds with the same vertical angle m, centered on the distance value and reflectivity value of the current point cloud, k points are taken forward along the horizontal angle and k points are taken backward. It should be noted that taking k points forward refers to taking the k point clouds obtained by the lidar before the acquisition time of the current point cloud, and taking k points backward refers to taking the k point clouds obtained by the lidar after the acquisition time of the current point cloud.
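Collecting the formulas above, a minimal sketch of the window-size computation might read as follows; the function name, the unit conventions (distances in meters, AngR in degrees), and the default values are illustrative assumptions of this sketch:

```python
import math

def window_half_width(llane: float, dis_mn: float,
                      ang_r: float = 0.2, mr: float = 1.2, n_mult: int = 3) -> int:
    """Return k, the half-width of the target analysis point cloud line segment."""
    theta_lane = 2 * math.atan(llane / dis_mn)        # angle subtended by the lane line, radians
    theta_deg = math.degrees(theta_lane)              # AngR is assumed to be in degrees
    point_n = round(mr * theta_deg / ang_r)           # lane line point count with margin Mr
    point_n = 2 * math.ceil((point_n + 1) / 2) - 1    # round up to an odd number
    point_a = n_mult * point_n                        # PointA = N * PointN
    return (point_a - 1) // 2                         # k = (PointA - 1) / 2
```

For example, under these assumptions a 0.15 m lane line seen at 10 m with AngR = 0.2 gives thetalane ≈ 1.72°, PointN = 11 after rounding up to odd, PointA = 33, and k = 16.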
S12: and segmenting the target analysis point cloud line segment according to a preset segmentation rule.
In this embodiment of the application, the preset segmentation rule may be determined based on an application scenario, for example, the target analysis point cloud line segments may be equally divided into N segments; wherein N is a preset multiple in S11. Of course, the preset segmentation rule may also be to divide the target analysis point cloud line segment into N segments according to different point cloud number requirements, and the like, which is not limited herein.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating segmentation of a target analysis point cloud line segment according to an embodiment of the present disclosure. As shown in fig. 5, the preset segmentation rule is taken as an example to equally divide the target analysis point cloud line segments into 3 segments:
After the target analysis point cloud line segment is obtained, its distance values are recorded as DisG(u) = Dis(m, n-k : n+k) and its reflectivity values as RefG(u) = Ref(m, n-k : n+k), where u takes values from 0 to 3*PointN-1.
The distance values are divided into three segments, recorded respectively as:
first segment: DisG00(u) = DisG(u), u = 0 … PointN-1;
second segment: DisG01(u) = DisG(u + PointN), u = 0 … PointN-1;
third segment: DisG02(u) = DisG(u + 2*PointN), u = 0 … PointN-1.
The reflectivity values are divided into three segments, recorded respectively as:
first segment: RefG00(u) = RefG(u), u = 0 … PointN-1;
second segment: RefG01(u) = RefG(u + PointN), u = 0 … PointN-1;
third segment: RefG02(u) = RefG(u + 2*PointN), u = 0 … PointN-1.
S13: and calculating the point cloud distance characteristic value of the target analysis point cloud line segment according to the distance value of the target analysis point cloud line segment, calculating the point cloud distance characteristic value of each segment of point cloud data according to the distance value of each segment, and calculating the point cloud reflectivity characteristic value of each segment of point cloud data according to the reflectivity value of each segment.
In the embodiment of the application, the point cloud distance characteristic value of the target analysis point cloud line segment is calculated according to the distance value of the target analysis point cloud line segment, and the point cloud distance characteristic value of the target analysis point cloud line segment comprises the distance mean value and the distance variance of the target analysis point cloud line segment.
The distance mean of the target analysis point cloud line segments is:
Figure BDA0003424724190000101
the distance variance of the target analysis point cloud line segment is as follows:
Figure BDA0003424724190000102
/>
In the embodiment of the present application, taking the three-way split of the target analysis point cloud line segment shown in fig. 5 as an example, the point cloud distance characteristic value of the first segment is calculated from the distance values of the first segment, that of the second segment from the distance values of the second segment, and that of the third segment from the distance values of the third segment. These point cloud distance characteristic values likewise comprise the distance mean and the distance variance.
The distance mean of the first segment point cloud is:
Figure BDA0003424724190000111
the distance variance of the first segment point cloud is:
Figure BDA0003424724190000112
Similarly, the distance mean MeanDisG01 and distance variance SigmaDisG01 of the second segment point cloud, and the distance mean MeanDisG02 and distance variance SigmaDisG02 of the third segment point cloud, are calculated.
In the embodiment of the application, the point cloud reflectivity characteristic value of the point cloud of the first section is calculated according to the reflectivity value of the first section, the point cloud reflectivity characteristic value of the point cloud of the second end is calculated according to the reflectivity value of the second end, and the point cloud reflectivity characteristic value of the point cloud of the third section is calculated according to the reflectivity value of the third section. The point cloud reflectivity characteristic values comprise reflectivity mean values and reflectivity variances.
The mean value of the reflectivity of the first segment point cloud is:
Figure BDA0003424724190000113
the reflectance variance of the first segment point cloud is:
Figure BDA0003424724190000114
Similarly, the reflectivity mean MeanRefG01 and reflectivity variance SigmaRefG01 of the second segment point cloud, and the reflectivity mean MeanRefG02 and reflectivity variance SigmaRefG02 of the third segment point cloud, are calculated.
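The per-segment statistics above are plain means and population variances; a hedged sketch (function and variable names are illustrative):

```python
import numpy as np

def segment_stats(values: np.ndarray) -> tuple[float, float]:
    """Mean and population variance of one segment (divide by PointN, not PointN - 1)."""
    mean = float(values.mean())
    var = float(((values - mean) ** 2).mean())
    return mean, var

# e.g. mean_ref_g00, sigma_ref_g00 = segment_stats(ref_g00)
```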
S14: and calculating the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
In the embodiment of the present application, the point clouds of the vertical channels adjacent to the target analysis line segment refer to the point cloud data at different vertical angles but the same horizontal angles, comprising the point cloud of the previous vertical channel and the point cloud of the next vertical channel. The distance characteristic value may include the distance mean.
Specifically, the distance means of the point clouds of the previous (m-1) and next (m+1) vertical channels are calculated as:
MeanDisUp = (1/(3*PointN)) * Σ_{v=n-k..n+k} Dis(m-1, v);
MeanDisDown = (1/(3*PointN)) * Σ_{v=n-k..n+k} Dis(m+1, v).
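A sketch of these adjacent-channel means, under the same 2-D array assumption as the earlier sketches and assuming channels m-1 and m+1 both exist:

```python
import numpy as np

def adjacent_channel_means(dis: np.ndarray, m: int, n: int, k: int) -> tuple[float, float]:
    """Distance means of the previous (m-1) and next (m+1) vertical channels
    over the same horizontal span as the target analysis segment."""
    idx = np.arange(n - k, n + k + 1) % dis.shape[1]
    return float(dis[m - 1, idx].mean()), float(dis[m + 1, idx].mean())
```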
s15: and identifying whether the current point cloud is the lane line point cloud or not according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
In specific application, identification conditions are set according to the point cloud distance characteristic value of a target analysis point cloud line segment, the point cloud distance characteristic value of each section of point cloud data, the point cloud reflectivity characteristic value of each section of point cloud data and the distance characteristic value of the point cloud of an adjacent vertical channel of the target analysis line segment, if the identification conditions are met, the current point cloud is determined to be the lane line point cloud, and if the identification conditions are not met, the current point cloud is determined not to be the lane line point cloud.
In a specific application, the identification conditions set according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each piece of point cloud data, the point cloud reflectivity characteristic value of each piece of point cloud data, and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment may include the following conditions:
condition 1: the distance mean value of the target analysis point cloud line segment is larger than a first preset threshold value. The first preset threshold may be set based on an actual scene, and is not limited herein, and for example, the first preset threshold may be set to 5 centimeters.
Condition 2: the ratio of the reflectivity mean value of the point cloud of the second segment to that of the first segment, and the ratio of the reflectivity mean value of the point cloud of the second segment to that of the third segment, are both larger than a second preset threshold. The second preset threshold may also be set based on the actual scene, which is not limited herein; for example, it may be set to any value between 1.3 and 1.5.
Condition 3: the reflectivity variance of the first segment, the reflectivity variance of the second segment and the reflectivity variance of the third segment are all smaller than a third preset threshold. The third preset threshold may also be set based on an actual scene, and is not limited herein, and for example, the third preset threshold may be set to 5.
Condition 4: the vertical angle of the current point cloud is a negative value.
Condition 5: the absolute value of the difference between the distance mean of the point clouds of the adjacent vertical channels and the distance mean of the target analysis point cloud line segment is greater than a fourth preset threshold. The fourth preset threshold may also be set based on the actual scene, which is not limited herein; for example, it may be set to 20 centimeters.
When all five of the above conditions are satisfied, the current point cloud can be determined to be a point cloud on the lane line.
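For illustration, the five conditions might be combined as in the sketch below. The threshold defaults follow the example values above, assuming distances in meters (5 cm → 0.05, 20 cm → 0.20); treating condition 5 as requiring both adjacent channels to differ from the window mean is an assumption of this sketch:

```python
def is_lane_line_point(mean_dis_g: float,
                       mean_ref: tuple[float, float, float],   # MeanRefG00/01/02
                       sigma_ref: tuple[float, float, float],  # SigmaRefG00/01/02
                       ang_v: float,                           # vertical angle of current point
                       mean_dis_up: float, mean_dis_down: float,
                       th1: float = 0.05, th2: float = 1.4,
                       th3: float = 5.0, th4: float = 0.20) -> bool:
    """Return True when all five identification conditions hold."""
    cond1 = mean_dis_g > th1                                   # condition 1
    cond2 = (mean_ref[1] / mean_ref[0] > th2 and
             mean_ref[1] / mean_ref[2] > th2)                  # condition 2
    cond3 = max(sigma_ref) < th3                               # condition 3
    cond4 = ang_v < 0                                          # condition 4
    cond5 = (abs(mean_dis_up - mean_dis_g) > th4 and
             abs(mean_dis_down - mean_dis_g) > th4)            # condition 5 (both-channel assumption)
    return cond1 and cond2 and cond3 and cond4 and cond5
```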
It can be seen from the above that, in the lane line identification method provided by the embodiment of the application, the target analysis point cloud line segment is determined from the current point cloud, the distance characteristic value and reflectivity characteristic value of each segment of point cloud are determined segment by segment, and whether the current point cloud is a lane line point cloud is then identified based on these characteristic values, which can effectively improve the identification accuracy of lane line point clouds.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a flow chart of a lane line identification method according to another embodiment of the present application. As shown in fig. 6, different from the previous embodiment, the lane line identification method provided in the embodiment of the present application further includes the following steps:
s16: and enhancing the reflectivity of the point cloud of the lane line.
In the embodiment of the application, in order to identify the lane line point cloud for the application of automatic driving and the like, after the current point cloud is determined to be the lane line point cloud, the reflectivity of the current point cloud can be enhanced.
Specifically, the reflectivity of the lane line point cloud may be enhanced based on the enhancement coefficient. Namely:
RefGEnh(K)=RefG(K)*KRat;
where RefGEnh(K) is the reflectivity of the enhanced lane line point cloud, RefG(K) is the reflectivity of the lane line point cloud obtained from the received point cloud, and KRat is the enhancement coefficient.
In this embodiment of the application, the enhancement coefficient may be determined from the ratio of the reflectivity mean of the second segment point cloud to that of the first segment, Ratio0100 = MeanRefG01 / MeanRefG00, and the ratio of the reflectivity mean of the second segment point cloud to that of the third segment, Ratio0102 = MeanRefG01 / MeanRefG02; specifically, the maximum, the minimum, or the mean of the two ratios may be taken, i.e.:
KRat = max(Ratio0100, Ratio0102); or
KRat = min(Ratio0100, Ratio0102); or
KRat = (Ratio0100 + Ratio0102) / 2.
in embodiments of the present application, to avoid the reflectivity of the enhanced point cloud exceeding the maximum reflectivity limit, the reflectivity of the enhanced point cloud may be limited based on the maximum reflectivity limit. That is, if the reflectivity of the enhanced lane line point cloud exceeds the maximum reflectivity limit, the reflectivity of the enhanced lane line point cloud is set as the maximum reflectivity limit.
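A sketch of the enhancement and clamping step; the maximum limit of 255 follows the reflectivity scale described earlier (an assumption here, as the patent leaves the limit unspecified), and the `mode` switch simply selects among the three KRat choices above:

```python
def enhance_reflectivity(ref_k: float, ratio0100: float, ratio0102: float,
                         max_ref: float = 255.0, mode: str = "max") -> float:
    """Enhance one lane line point's reflectivity and clamp it to the maximum limit."""
    if mode == "max":
        k_rat = max(ratio0100, ratio0102)
    elif mode == "min":
        k_rat = min(ratio0100, ratio0102)
    else:  # "mean"
        k_rat = 0.5 * (ratio0100 + ratio0102)
    return min(ref_k * k_rat, max_ref)
```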
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by functions and internal logic of the process, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Based on the lane line identification method provided by the above embodiment, the embodiment of the invention further provides an embodiment of the terminal device for implementing the above method embodiment.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. In the embodiment of the present application, each unit included in the terminal device is configured to execute the steps in the embodiment corresponding to fig. 2; please refer to fig. 2 and the related description of that embodiment. For convenience of explanation, only the portions related to the present embodiment are shown. As shown in fig. 7, the terminal device 70 includes: a target determination unit 71, a segmentation unit 72, a first calculation unit 73, a second calculation unit 74, and a recognition unit 75. Wherein:
the target determination unit 71 is configured to determine a target analysis point cloud line segment according to the current point cloud.
The segmentation unit 72 is configured to segment the target analysis point cloud line segment according to a preset segmentation rule.
The first calculating unit 73 is configured to calculate a point cloud distance characteristic value of the target analysis point cloud line segment according to the distance value of the target analysis point cloud line segment, calculate a point cloud distance characteristic value of each segment of point cloud data according to the distance value of each segment, and calculate a point cloud reflectivity characteristic value of each segment of point cloud data according to each segment reflectivity value.
The second calculation unit 74 is used to calculate the distance feature value of the point cloud of the adjacent vertical channel to the target analysis line segment.
The identification unit 75 is configured to identify whether the current point cloud is a lane line point cloud according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data, and the distance characteristic value of the point cloud of the adjacent vertical channel to the target analysis line segment.
In one implementation manner of the embodiment of the present application, the target determining unit 71 may include a number determining unit and a line segment determining unit, where:
the number determining unit is used for obtaining the width of a lane line and the angle of the lane line relative to the center of the laser radar to determine the point cloud number of the lane line;
the line segment determining unit is used for determining a target analysis point cloud line segment according to the current point cloud, the number of the lane line point clouds and a preset multiple.
In one implementation manner of the embodiment of the present application, the identification unit includes a condition setting unit and a determination unit.
The condition setting unit is used for setting identification conditions according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the point cloud distance characteristic value of the adjacent vertical channel of the target analysis line segment;
the judging unit is used for determining that the current point cloud is the lane line point cloud if the identification condition is met, or determining that the current point cloud is not the lane line point cloud if the identification condition is not met.
In an implementation manner of the embodiment of the present application, the terminal device further includes an enhancing unit.
The enhancement unit is used for enhancing the reflectivity of the point cloud of the lane line.
In particular, the reflectivity of the point cloud of the lane line is enhanced based on the enhancement coefficient.
It should be noted that, for the above-mentioned information interaction between the units, the execution process, and other contents, the specific functions and technical effects of the embodiments of the method of the present application are based on the same concept, and specific reference may be made to the embodiments of the method, and details are not described here.
Fig. 8 is a schematic structural diagram of a terminal device according to another embodiment of the present application. As shown in fig. 8, the terminal device 8 provided in this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as a lane line identification program, stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the above lane line identification method embodiments, such as S11 to S15 shown in fig. 2. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the terminal device embodiments described above, for example the functions of the units 71 to 75 shown in fig. 7.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into units, and specific functions of each unit refer to the description in the corresponding embodiment of fig. 7, which is not repeated herein.
The terminal device may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device 8, and does not constitute a limitation of terminal device 8, and may include more or fewer components than shown, or some of the components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing the computer programs and other programs and data required by the terminal device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium. Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure, and as shown in fig. 9, a computer program 91 is stored in the computer-readable storage medium 90, and when the computer program 91 is executed by a processor, the lane line identification method can be implemented.
The embodiment of the application provides a computer program product, and when the computer program product runs on a terminal device, the lane line identification method can be realized when the terminal device executes the computer program product.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is merely used as an example, and in practical applications, the foregoing function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the terminal device is divided into different functional units or modules to perform all or part of the above-described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and parts that are not described or illustrated in a certain embodiment may refer to the description of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A lane line identification method is characterized by comprising the following steps:
determining a target analysis point cloud line segment according to the current point cloud;
segmenting the target analysis point cloud line segment according to a preset segmentation rule;
calculating point cloud distance characteristic values of the target analysis point cloud line segments according to the distance values of the target analysis point cloud line segments, calculating point cloud distance characteristic values of each segment of point cloud data according to the distance values of each segment, and calculating point cloud reflectivity characteristic values of each segment of point cloud data according to the reflectivity values of each segment;
calculating a distance characteristic value of point clouds of adjacent vertical channels of the target analysis line segment;
and identifying whether the current point cloud is the lane line point cloud or not according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
2. The lane line identification method of claim 1, wherein determining a target analysis point cloud segment from a current point cloud comprises:
acquiring the width of a lane line and the angle of the lane line relative to the center of the laser radar to determine the point cloud number of the lane line;
and determining a target analysis point cloud line segment according to the current point cloud, the number of the lane line point clouds and a preset multiple.
3. The lane marking identification method according to claim 1, wherein identifying whether a current point cloud is a lane marking point cloud according to a point cloud distance characteristic value of a target analysis point cloud line segment, a point cloud distance characteristic value of each piece of point cloud data, a point cloud reflectivity characteristic value of each piece of point cloud data, and a distance characteristic value of a point cloud of an adjacent vertical channel to the target analysis line segment comprises:
identifying conditions are set according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment;
and if the identification condition is met, determining that the current point cloud is the lane line point cloud, otherwise, determining that the current point cloud is not the lane line point cloud.
4. The lane line identification method according to claim 3, wherein the identification condition includes:
the distance average value of the target analysis point cloud line segment is greater than a first preset threshold value;
the ratio of the reflectivity mean value of the point cloud of the second segment to that of the first segment, and the ratio of the reflectivity mean value of the point cloud of the second segment to that of the third segment, are both larger than a second preset threshold value;
the reflectivity variance of the first segment, the reflectivity variance of the second segment and the reflectivity variance of the third segment are all smaller than a third preset threshold;
the vertical angle of the current point cloud is a negative value;
and the absolute value of the difference value between the distance mean value of the point clouds of the adjacent vertical channels and the distance mean value of the target analysis point cloud line segment is greater than a fourth preset threshold value.
5. The lane marking identification method according to claim 1, further comprising, after identifying whether the current point cloud is a lane marking point cloud or not according to the point cloud distance feature value of the target analysis point cloud line segment, the point cloud distance feature value of each piece of point cloud data, the point cloud reflectivity feature value of each piece of point cloud data, and the distance feature value of the point cloud of the adjacent vertical channel to the target analysis line segment:
and enhancing the reflectivity of the point cloud of the lane lines.
6. The lane line identification method of claim 5, wherein the enhancing the reflectivity of the lane line point cloud comprises:
the reflectivity of the lane line point cloud is enhanced based on the enhancement coefficient.
7. The lane line identification method according to claim 6, further comprising, after enhancing the reflectivity of the lane line point cloud based on the enhancement coefficient:
and if the reflectivity of the enhanced lane line point cloud exceeds the maximum reflectivity limit value, setting the reflectivity of the enhanced lane line point cloud as the maximum reflectivity limit value.
8. A terminal device, comprising:
the target determining unit is used for determining a target analysis point cloud line segment according to the current point cloud;
the segmentation unit is used for segmenting the target analysis point cloud line segment according to a preset segmentation rule;
the first calculation unit is used for calculating point cloud distance characteristic values of target analysis point cloud line segments according to the distance values of the target analysis point cloud line segments, calculating the point cloud distance characteristic values of each segment of point cloud data according to the distance values of each segment, and calculating the point cloud reflectivity characteristic values of each segment of point cloud data according to the segmented reflectivity values;
the second calculation unit is used for calculating a distance characteristic value of point clouds of adjacent vertical channels of the target analysis line segment;
and the identification unit is used for identifying whether the current point cloud is the lane line point cloud or not according to the point cloud distance characteristic value of the target analysis point cloud line segment, the point cloud distance characteristic value of each segment of point cloud data, the point cloud reflectivity characteristic value of each segment of point cloud data and the distance characteristic value of the point cloud of the adjacent vertical channel of the target analysis line segment.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the lane line identification method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the lane line identification method according to any one of claims 1 to 7.
CN202111575632.7A 2021-12-21 2021-12-21 Lane line recognition method, terminal device, and computer-readable storage medium Pending CN115902815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111575632.7A CN115902815A (en) 2021-12-21 2021-12-21 Lane line recognition method, terminal device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111575632.7A CN115902815A (en) 2021-12-21 2021-12-21 Lane line recognition method, terminal device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115902815A (en)

Family

ID=86488597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111575632.7A Pending CN115902815A (en) 2021-12-21 2021-12-21 Lane line recognition method, terminal device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115902815A (en)

Similar Documents

Publication Publication Date Title
US10908257B2 (en) Signal processing apparatus, signal processing method, and program
US11250288B2 (en) Information processing apparatus and information processing method using correlation between attributes
EP3876141A1 (en) Object detection method, related device and computer storage medium
CN106991389B (en) Device and method for determining road edge
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
EP2889641B1 (en) Image processing apparatus, image processing method, program and image processing system
US11204610B2 (en) Information processing apparatus, vehicle, and information processing method using correlation between attributes
CN111136648B (en) Mobile robot positioning method and device and mobile robot
JP6038422B1 (en) Vehicle determination device, vehicle determination method, and vehicle determination program
US10672141B2 (en) Device, method, system and computer-readable medium for determining collision target object rejection
CN110850859B (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
Sun et al. A robust lane detection method for autonomous car-like robot
US11054245B2 (en) Image processing apparatus, device control system, imaging apparatus, image processing method, and recording medium
KR20190134303A (en) Apparatus and method for image recognition
US11861914B2 (en) Object recognition method and object recognition device
Oniga et al. A fast ransac based approach for computing the orientation of obstacles in traffic scenes
CN115902815A (en) Lane line recognition method, terminal device, and computer-readable storage medium
CN117677862A (en) Pseudo image point identification method, terminal equipment and computer readable storage medium
JP7064400B2 (en) Object detection device
WO2018145245A1 (en) Method, device and system for configuration of a sensor on a moving object
WO2024060209A1 (en) Method for processing point cloud, and radar
US20220196841A1 (en) Object recognition abnormality detection apparatus, object recognition abnormality detection program product, and object recognition abnormality detection method
Tseng et al. Vehicle distance estimation method based on monocular camera
CN116343153A (en) Target detection method and device, electronic equipment and storage medium
CN115100229A (en) Obstacle identification method and device based on depth point cloud data and robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination