CN113255404A - Lane line recognition method and device, electronic device and computer-readable storage medium - Google Patents
- Publication number: CN113255404A (application CN202010086724.8A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- vehicle
- target area
- curve
- clustering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Abstract
The application discloses a lane line identification method and device, an electronic device, and a computer-readable storage medium, applicable to the field of automatic driving. The specific implementation scheme is as follows: track mining is carried out using a plurality of first images of a target area to obtain the driving track of each vehicle, wherein the first images comprise images of the vehicles driving in the target area; a first curve is fitted using the driving track of each vehicle; and a first recognition result of the lane line of the target area is determined using the first curve. The method and device can reduce the influence of image quality on lane line identification and improve the accuracy of lane line identification.
Description
Technical Field
The present application relates to the field of intelligent transportation, and in particular, to a lane line identification method, apparatus, electronic device, and computer-readable storage medium. The application can be applied to the field of automatic driving.
Background
Lane lines play an important role in automatic driving and provide important inputs to modules such as positioning and decision control. Conventional lane line recognition schemes identify lane lines in images by image processing techniques. However, such schemes require the lane lines to be clearly visible in the image and are therefore strongly affected by image quality: if a lane line in the image is unclear or occluded, it cannot be accurately identified.
Disclosure of Invention
The embodiment of the application provides a lane line identification method, which comprises the following steps:
track mining is carried out by utilizing a plurality of first images of the target area to obtain the driving track of each vehicle, wherein the first images comprise images of the vehicles driving in the target area;
fitting to obtain a first curve by using the running track of each vehicle;
and determining a first recognition result of the lane line of the target area by using the first curve.
According to the method and the device, the driving track of each vehicle is obtained using the plurality of first images of the target area, a first curve is fitted from the driving tracks, and a first recognition result of the lane line of the target area is determined using the first curve. Even when the lane lines are damaged or missing, or not evident in the image, they can still be identified; the influence of image quality on lane line identification is reduced and the accuracy of lane line identification is improved.
In one embodiment, the method further comprises:
performing semantic segmentation on the second image of the target area to obtain a second recognition result of the lane line of the target area;
and obtaining a third recognition result of the lane line of the target area by using the first recognition result and the second recognition result.
In the above embodiment, semantic segmentation is further performed on the second image of the target area to obtain a second recognition result, and a third recognition result of the lane line of the target area is obtained by combining the first and second recognition results. This reduces the influence of lane-changing vehicles on the accuracy of track mining and allows the lane line to be recognized accurately under various weather and environmental conditions.
In one embodiment, fitting a first curve using a travel track of each vehicle includes:
clustering by using edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and fitting by using a plurality of edge coordinates included in each clustering result to obtain a first curve.
In the above embodiment, the edge coordinates in the driving track of each vehicle are clustered, so that the edge coordinates of vehicles driving in different lanes fall into different clustering results, and each clustering result can be fitted to the curve of the corresponding lane. Owing to the shooting angle, some lane lines may be occluded by vehicles, and vehicles may occlude one another; fitting curves from the edge coordinates of the vehicle driving tracks reduces the influence of occluded lane lines and vehicles and improves the accuracy of lane line identification.
In one embodiment, performing track mining using the plurality of first images of the target area to obtain the travel track of each vehicle includes:
performing coordinate conversion on each first image to obtain the coordinate of each first image in a world coordinate system;
and performing track mining by using the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system.
In the above embodiment, the coordinates of each first image are converted to perform track mining using the coordinates of each first image in the world coordinate system, so as to obtain the coordinates of each vehicle in the world coordinate system, thereby improving the accuracy of track mining and the accuracy of lane line recognition.
In one embodiment, fitting a first curve using a travel track of each vehicle includes:
clustering the edge coordinates of each vehicle under a world coordinate system to obtain a plurality of clustering results;
converting the image including the clustering result into a pixel coordinate system;
and fitting by using a plurality of edge coordinates included in each clustering result under a pixel coordinate system to obtain a first curve.
In the above embodiment, the edge position coordinates of each vehicle are clustered in the world coordinate system, so that the clustering accuracy can be improved, and then the first curve fitted by each clustering result in the pixel coordinate system can be obtained through coordinate conversion.
In one embodiment, the method further comprises:
and calculating the corresponding curvature using the center coordinates in the driving track of the vehicle, so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight, left turn, U-turn, and right turn.
In the above embodiment, the curvature of the vehicle driving track is calculated using the center coordinates of the track, so the type of lane line of the target area can also be determined from the curvature.
The embodiment of the present application further provides a lane line recognition device, including:
the driving track module is used for carrying out track mining by utilizing a plurality of first images of the target area to obtain the driving track of each vehicle, wherein the first images comprise the images of the vehicles driving in the target area;
the curve fitting module is used for fitting to obtain a first curve by utilizing the running track of each vehicle;
and the first identification module is used for determining a first identification result of the lane line of the target area by using the first curve.
In one embodiment, the apparatus further comprises:
the second recognition module is used for performing semantic segmentation on the second image of the target area to obtain a second recognition result of the lane line of the target area;
and the third identification module is used for obtaining a third identification result of the lane line of the target area by using the first identification result and the second identification result.
In one embodiment, the curve fitting module comprises:
the clustering submodule is used for clustering by utilizing the edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and the fitting submodule is used for fitting by utilizing a plurality of edge coordinates included in each clustering result to obtain a first curve.
In one embodiment, the driving trajectory module includes:
the first conversion submodule is used for carrying out coordinate conversion on each first image to obtain the coordinate of each first image in a world coordinate system;
and the track mining submodule is used for carrying out track mining by utilizing the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system.
In one embodiment, the curve fitting module comprises:
the clustering submodule is used for clustering the edge coordinates of each vehicle under a world coordinate system to obtain a plurality of clustering results;
a second conversion sub-module for converting the image including the clustering result into a pixel coordinate system;
and the fitting submodule is used for fitting by using a plurality of edge coordinates included in each clustering result under a pixel coordinate system to obtain a first curve.
In one embodiment, the apparatus further comprises:
and the type determining module is used for calculating the corresponding curvature using the center coordinates in the driving track of the vehicle, so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight, left turn, U-turn, and right turn.
An embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the lane line identification methods of the embodiments of the present application.
The embodiment of the present application further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute any one of the lane line identification methods in the embodiments of the present application.
One embodiment in the above application has the following advantages or benefits: the driving track of each vehicle is obtained using a plurality of first images of a target area, a first curve is fitted from the driving tracks, and a first recognition result of the lane line of the target area is determined using the first curve. Even when the lane lines are damaged or missing, or not evident in the image, they can still be identified; the influence of image quality on lane line identification is reduced and the accuracy of lane line identification is improved.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of a lane line identification method according to an embodiment of the present application.
Fig. 2 is a schematic view of vehicle coordinates in a lane line identification method according to another embodiment of the present application.
Fig. 3 is a flowchart of a lane line identification method according to another embodiment of the present application.
Fig. 4 is a schematic diagram of lane line fitting in a lane line identification method according to another embodiment of the present application.
Fig. 5a is a flowchart of a lane line identification method according to another embodiment of the present application.
Fig. 5b and 5c are schematic views of determining a lane line type using a curvature in a lane line recognition method according to another embodiment of the present application.
Fig. 6 is a block diagram of a lane line identification apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of a lane line identification apparatus according to another embodiment of the present application.
Fig. 8 is a block diagram of a lane line identification apparatus according to another embodiment of the present application.
Fig. 9 is a block diagram of an electronic device for implementing the lane line identification method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a flowchart of a lane line identification method according to an embodiment of the present application, and as shown in fig. 1, the method may include:
step S11, performing track mining using a plurality of first images of the target area to obtain the driving track of each vehicle, wherein the first images comprise images of the vehicles driving in the target area;
step S12, fitting a first curve using the driving track of each vehicle;
step S13, determining a first recognition result of the lane line of the target area using the first curve.
In the embodiment of the application, the plurality of first images can be acquired by image acquisition equipment arranged around the target area, such as cameras external to the vehicles mounted at the roadside or at an intersection. The plurality of first images may be consecutive frames of a video captured by the camera, or images of the target area captured at discrete time points. The target area may be the area of an intersection or of a section of road.
Track mining on the first images yields various information about the target area. For example, each vehicle in a first image can be located with a vehicle detection frame, giving the coordinates of each vehicle, including edge coordinates and center coordinates; the travel track of each vehicle is then obtained from these coordinates across images.
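The patent does not specify how per-frame detections are associated into tracks; as an illustration only, a minimal nearest-neighbour sketch (the function name `build_tracks`, the detection-tuple layout, and the distance threshold are all assumptions):

```python
import numpy as np

def build_tracks(frames, max_dist=3.0):
    """Associate per-frame detection-frame coordinates into per-vehicle tracks.

    `frames` is a list of lists of (cx, cy, x_left, y_left) tuples, one inner
    list per first image; association is nearest-neighbour on the center point,
    with `max_dist` an assumed gating threshold.
    """
    tracks = []  # each track is a list of detections for one vehicle
    for dets in frames:
        for det in dets:
            cx, cy = det[0], det[1]
            # find the existing track whose last center point is closest
            best, best_d = None, max_dist
            for tr in tracks:
                px, py = tr[-1][0], tr[-1][1]
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tr, d
            if best is not None:
                best.append(det)   # continue an existing vehicle track
            else:
                tracks.append([det])  # start a new vehicle track
    return tracks
```

Each resulting track then supplies the edge and center coordinate sequences used by the later fitting and curvature steps.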
In step S12, the first curve may be obtained using a variety of curve fitting methods, such as interpolation or least squares. By using the first curve, the position information of the lane line can be determined, thereby obtaining a first recognition result of the lane line.
For example, which edge coordinates to select may be set in advance according to the shooting angle of the camera. As shown in fig. 2, assume that the coordinates on the left side of the vehicle body are selected for fitting the lane line. The detected edge coordinates of a vehicle include four corner coordinates a1, a2, a3 and a4, of which a1 and a2 lie on the left side of the vehicle body. A first curve is fitted using the left-side coordinates of a plurality of vehicles, for example a1, a2, b1, b2, as the edge of the lane line. Given a sufficient number of vehicles, a curve close to the actual lane line can be fitted. A vehicle may occlude the lane line while driving, so that the edge coordinates of some vehicles are far from the lane line; the data can therefore be washed before fitting: body-edge coordinates of vehicles that are obviously far from the left-side coordinates of the other vehicles are deleted, and curve fitting is performed with the remaining coordinates. For example, the coordinates c1 and c2 on the left side of one vehicle in fig. 2 are deleted, and curve fitting is performed using the remaining coordinates.
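The wash-then-fit step can be sketched as follows; the helper name `fit_lane_edge`, the standard-deviation outlier rule, and the polynomial degree are illustrative assumptions, not details fixed by the patent:

```python
import numpy as np

def fit_lane_edge(points, degree=2, z_thresh=2.0):
    """Fit a lane-line curve x = f(y) to vehicle body-edge coordinates,
    discarding points obviously far from the bulk (c1/c2-style outliers)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]   # x: lateral position, y: along the road
    # wash the data: drop points whose lateral offset from the mean is large
    keep = np.abs(x - x.mean()) <= z_thresh * x.std()
    # least-squares polynomial fit on the remaining coordinates
    coeffs = np.polyfit(y[keep], x[keep], degree)
    return np.poly1d(coeffs)
```

Interpolation-based fitting, mentioned in the text as an alternative, would replace the `polyfit` call.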
Due to weather, road construction, lane re-planning and the like, lane lines in the target area may be damaged or partially missing. Moreover, captured images vary in quality, and lane lines in some images may be missing or blurred. Partial absence or blurring of lane lines, however, has little influence on driver behavior: vehicles still keep to the same lane. The track mining approach can therefore recover lanes that are faint or absent in the image, reducing the influence of image quality on lane line identification.
According to the method and the device, the driving track of each vehicle is obtained using the plurality of first images of the target area, a first curve is fitted from the driving tracks, and a first recognition result of the lane line of the target area is determined using the first curve. Even when a lane line is damaged or missing, or not evident in the image, it can still be identified; the influence of image quality, weather, road construction, and lane re-planning on lane line identification is reduced, and the accuracy of lane line identification is improved.
In one embodiment, as shown in fig. 3, the method further comprises:
step S21, performing semantic cutting on the second image of the target area to obtain a second recognition result of the lane line of the target area;
and step S22, obtaining a third recognition result of the lane line of the target area by using the first recognition result and the second recognition result.
There is no timing requirement between step S21 and step S11; they may be executed in either order or in parallel. The first image and the second image may be the same image or different images.
Semantic segmentation, a fundamental task in computer vision, divides visual input into different semantically interpretable categories, i.e. categories meaningful in the real world. For example, pixels belonging to a vehicle in an image can be distinguished by semantic segmentation and set to the same color; likewise, pixels belonging to a lane line can be distinguished and set to the same color. On the one hand, under the influence of weather, road construction, lane re-planning and the like, lane lines obtained by semantically segmenting images may contain errors. On the other hand, semantic segmentation depends on pre-trained model parameters; under the influence of weather, the shooting angle of the image acquisition equipment may change, so that the pre-trained parameters no longer fit and the recognized lane lines may be inaccurate. In the above embodiment, the third recognition result of the lane line in the target area is obtained by combining the first and second recognition results, which reduces the influence of lane-changing vehicles on the accuracy of track mining and allows accurate recognition under various weather and environmental conditions. In addition, the third recognition result can be used to adaptively adjust the semantic segmentation model parameters, continuously improving the accuracy of lane line identification.
Owing to the shooting angle, some lane lines may be occluded by vehicles, and vehicles may occlude one another, so the positions of the vehicle detection frames in the images may contain errors. For example, the first image of the target area may be captured by a traffic-enforcement camera mounted on a high pole at an intersection; its shooting angle is not perfectly parallel to the lane lines and may be offset. If the camera shoots from left to right, vehicles in a lane occlude the lane line on their right, and vehicles close together in adjacent lanes occlude each other. Left and right here are described relative to the vehicle traveling direction: if the camera's field of view is biased toward the left side of the vehicle body, its shooting angle can be considered to be from left to right. In that case the left edge of the leftmost lane line is unlikely to be occluded, and the left edge of the vehicle detection frame is the clearest and most accurate. Fitting curves from these edge coordinates of each vehicle's driving track therefore reduces the influence of occluded lane lines and vehicles on the recognition result and improves the accuracy of lane line identification.
In one embodiment, fitting a first curve using the travel track of each vehicle may include:
clustering by using edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and fitting by using a plurality of edge coordinates included in each clustering result to obtain a first curve.
The edge coordinates in the driving tracks of all vehicles are utilized for clustering, the edge coordinates of the vehicles driving on different lanes are clustered into different clustering results, and each clustering result can be utilized to respectively fit curves corresponding to a plurality of lanes.
In the above embodiment, the number of lanes can be obtained from map data and used as prior information for clustering: the edge coordinates in the driving track of each vehicle are clustered into a number of clusters matching the lane count. Using the lane count from map data as a prior improves the clustering accuracy.
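The patent does not name a clustering algorithm; as one concrete possibility, plain 1-D k-means over the lateral edge coordinates, with the map-derived lane count as the number of clusters (function name and iteration count are assumptions):

```python
import numpy as np

def cluster_edges(edge_x, n_lanes, n_iter=20):
    """K-means on the lateral (x) edge coordinates of vehicle tracks,
    using the lane count from map data as the number of clusters."""
    x = np.asarray(edge_x, dtype=float)
    # initialise one center per lane, spread across the observed range
    centers = np.linspace(x.min(), x.max(), n_lanes)
    for _ in range(n_iter):
        # assign each coordinate to the nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned coordinates
        for k in range(n_lanes):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return labels, centers
```

Each resulting cluster then supplies the edge coordinates fitted to one lane-line curve.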
In the above embodiment, the first curve corresponding to the first lane line that is easy to recognize may be used in combination with the lane width information to obtain the first curve corresponding to the other lane line. For example, fitting the plurality of edge coordinates included in each clustering result to obtain a first curve may include:
fitting a plurality of edge coordinates included in the clustering result corresponding to the first lane line to obtain a first curve corresponding to the first lane line;
obtaining prior information of a second lane line according to a first curve corresponding to the first lane line and lane width information;
and fitting the plurality of edge coordinates included in the clustering result corresponding to the second lane line, guided by the prior information of the second lane line, to obtain a first curve corresponding to the second lane line.
As shown in fig. 4, take a scene in which the first image is captured from a left-to-right angle. In this scene, the clustering result containing the left edge coordinates of the leftmost vehicles is fitted to obtain a curve L1 corresponding to the leftmost first lane line. Combining L1 with the lane width information W gives a prediction curve L2 for the second lane line adjacent to the first: L1 is shifted by the width W, yielding L2 parallel to L1. Independently, a curve L2' for the second lane line can be fitted from vehicle edge coordinates by the same method used to fit L1. Using the prediction curve L2 as prior information and fusing it with L2' yields a more accurate curve L2'' for the second lane line. Various fusion methods are possible; for example, the parameters of the curves L2 and L2' can be combined with predetermined weights to obtain L2''.
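For polynomial curves of the form x = f(y), the shift-and-fuse step reduces to simple coefficient arithmetic. A minimal sketch under that representation (the function name and the fixed fusion weight are illustrative assumptions):

```python
import numpy as np

def fuse_second_lane(l1_coeffs, l2_fit_coeffs, lane_width, weight=0.5):
    """Shift the first-lane curve L1 by the lane width W to get the prior
    curve L2, then blend it with the independently fitted L2' into L2''.

    Coefficients are highest-degree first, as np.polyfit returns them.
    For x = f(y), a lateral shift by W just adds W to the constant term.
    """
    prior = np.array(l1_coeffs, dtype=float)
    prior[-1] += lane_width          # translate L1 laterally by W -> L2
    fitted = np.array(l2_fit_coeffs, dtype=float)
    # per-coefficient weighted average of prior L2 and fitted L2' -> L2''
    return weight * prior + (1.0 - weight) * fitted
```

More elaborate fusion schemes (e.g. weighting by fit residuals) would fit the same interface.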
The mining of the vehicle driving tracks and the fitting of curves to them can be performed either in the pixel coordinate system or in the world coordinate system.
In one embodiment, the track mining using the plurality of first images of the target area to obtain the travel track of each vehicle may include:
performing coordinate conversion on each first image to obtain the coordinate of each first image in a world coordinate system;
and performing track mining by using the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system.
For example, the first image may be converted from the pixel coordinate system to the camera coordinate system, and then from the camera coordinate system to the world coordinate system, using the intrinsic and extrinsic parameters of the image acquisition equipment. In the world coordinate system the coordinates carry three-dimensional information, and occlusions between objects in the first images are relatively reduced. Performing track mining in the world coordinate system therefore improves the precision of track mining and the accuracy of lane line identification.
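Under a flat-road assumption, the pixel-to-world mapping for ground points collapses to a single ground-plane homography. The sketch below shows only that simplified step; the patent describes the full intrinsic/extrinsic chain, and the matrix H here is assumed to have been derived from those calibrated parameters:

```python
import numpy as np

def pixel_to_world(points_px, H):
    """Map pixel coordinates to world (ground-plane) coordinates with a
    3x3 homography H, assuming a flat road surface."""
    pts = np.asarray(points_px, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T   # lift to homogeneous coordinates
    return homog[:, :2] / homog[:, 2:3]    # perspective divide
```

The inverse mapping (world back to pixel, used when projecting clustering results for fitting) applies `np.linalg.inv(H)` in the same way.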
In one embodiment, fitting the first curve using the travel locus of each vehicle may include:
clustering the edge coordinates of each vehicle under a world coordinate system to obtain a plurality of clustering results;
converting the image including the clustering result into a pixel coordinate system;
and fitting by using a plurality of edge coordinates included in each clustering result under a pixel coordinate system to obtain a first curve.
In the above embodiment, clustering the edge position coordinates of each vehicle in the world coordinate system improves the clustering accuracy; the result is then converted from the world coordinate system to the camera coordinate system and on to the pixel coordinate system, where a first curve is fitted from each clustering result.
In one embodiment, as shown in fig. 5a, the method may further include:
and step S51, calculating the corresponding curvature using the center coordinates in the driving track of the vehicle, so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight, left turn, U-turn, and right turn.
Compared with the edge position coordinates, the center coordinates of the vehicle driving track better reflect the curvature of the track; the curvature is then used to determine the type of lane line.
For example: as shown in fig. 5B and 5c, the curvatures of a plurality of sampling points in the vehicle travel track are calculated, and a point B with the maximum curvature is obtained. Assuming that the y-axis is vector (0,1), the angle α of vector y (0,1) to vector BC and the angle β of vector y (0,1) to vector AB are calculated in conjunction with the start point a and the end point C of the trajectory. If both α and β are smaller than a set threshold Δ h, it can indicate that the trajectory is straight. Otherwise, if α is greater than the threshold, sin (α) >0 represents a right turn, sin (α) <0 represents a left turn.
Fig. 6 is a block diagram of a lane line identification apparatus according to an embodiment of the present application. As shown in fig. 6, the lane line recognition apparatus may include:
a driving track module 61, configured to perform track mining by using a plurality of first images of a target area to obtain a driving track of each vehicle, where the first images include images of vehicles driving in the target area;
a curve fitting module 62, configured to fit a first curve by using the driving trajectory of each vehicle;
the first recognition module 63 is configured to determine a first recognition result of the lane line of the target area by using the first curve.
In one embodiment, as shown in fig. 7, the apparatus further comprises:
the second recognition module 64 is configured to perform semantic segmentation on the second image of the target area to obtain a second recognition result of the lane line of the target area;
and a third recognition module 65, configured to obtain a third recognition result of the lane line in the target area by using the first recognition result and the second recognition result.
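The text does not spell out how the first and second recognition results are combined into the third. As one hedged possibility (an assumption, not the claimed fusion rule), a pixel-wise union keeps lane pixels found by either the trajectory-fitted curves or the semantic segmentation:

```python
import numpy as np

def fuse_results(curve_mask, seg_mask):
    """Pixel-wise fusion of the two recognition results, given as
    boolean lane masks of the same shape. A union is only one of
    several plausible rules (intersection or weighted voting would
    also fit the description)."""
    return np.logical_or(curve_mask, seg_mask)
```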
In one embodiment, curve fitting module 62 includes:
the clustering submodule 621 is configured to perform clustering by using edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and a fitting submodule 622, configured to perform fitting by using the multiple edge coordinates included in each clustering result to obtain a first curve.
In one embodiment, as shown in fig. 8, the travel track module 61 includes:
the first conversion sub-module 611 is configured to perform coordinate conversion on each first image to obtain coordinates of each first image in a world coordinate system;
and the track mining submodule 612 is configured to perform track mining by using the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system.
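The coordinate conversion performed by the first conversion sub-module can be sketched with a ground-plane homography, assuming a flat road. The 3x3 matrix `H` would come from camera calibration, which the text does not detail; it is an assumed input here.

```python
import numpy as np

def pixel_to_world(pixel_pts, H):
    """Project an (N, 2) array of pixel coordinates onto the world
    ground plane using a 3x3 homography H (flat-road assumption)."""
    pts_h = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])  # homogeneous
    world_h = (H @ pts_h.T).T
    return world_h[:, :2] / world_h[:, 2:3]                       # dehomogenize
```

The inverse matrix `np.linalg.inv(H)` would map world coordinates back to the pixel coordinate system, as the second conversion sub-module requires.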
In one embodiment, curve fitting module 62 includes:
the clustering submodule 623 is used for clustering the edge coordinates of each vehicle in a world coordinate system to obtain a plurality of clustering results;
a second conversion submodule 624 for converting the image including the clustering result into a pixel coordinate system;
and a fitting submodule 625, configured to perform fitting by using the multiple edge coordinates included in each clustering result in the pixel coordinate system, so as to obtain a first curve.
In one embodiment, the apparatus further comprises:
and the type determining module 66 is used for calculating corresponding curvature by using the center coordinates in the driving track of the vehicle so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight running, left turning, turning around and right turning.
For the functions of each module in each apparatus of the embodiments of the present application, reference may be made to the corresponding descriptions in the above method; details are not repeated here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a block diagram of an electronic device for implementing the lane line identification method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 9, the electronic device includes: one or more processors 901, a memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information for a graphical user interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 9, one processor 901 is taken as an example.
The memory 902, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the lane line recognition method in the embodiment of the present application (e.g., the driving track module 61, the curve fitting module 62, and the first recognition module 63 shown in fig. 6). The processor 901 executes the non-transitory software programs, instructions, and modules stored in the memory 902 to perform the various functional applications and data processing of the server, that is, to implement the lane line identification method in the above method embodiment.
The memory 902 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device for the lane line identification method, and the like. In addition, the memory 902 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 902 may optionally include memories disposed remotely from the processor 901, and these remote memories may be connected through a network to the electronic device for the lane line identification method. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the lane line identification method may further include an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 9.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for the lane line identification method; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, Integrated circuitry, Application Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, the driving track of each vehicle is obtained by using a plurality of first images of the target area, a first curve is fitted using the driving tracks, and the first curve is used to determine the first recognition result of the lane line of the target area. Even when the lane lines are damaged or missing, or the lane lines in the image are not obvious, the lane lines can still be identified, which reduces the influence of image quality on lane line recognition and improves its accuracy.

Semantic segmentation is performed on a second image of the target area to obtain a second recognition result, and the first and second recognition results are combined to obtain a third recognition result of the lane line of the target area. This can reduce the influence of lane-changing vehicles on the accuracy of track mining, so that lane lines can be accurately recognized under various weather and environmental conditions.

Clustering is performed using the edge coordinates in the driving track of each vehicle, so that the edge coordinates of vehicles driving in different lanes fall into different clustering results, and each clustering result can be used to fit the curve corresponding to its lane. Because of the shooting angle, part of a lane line may be occluded by vehicles, and vehicles may occlude one another; fitting curves with the edge coordinates of the driving tracks reduces the influence of occluded lane lines and vehicles and improves the accuracy of lane line recognition.
Coordinate conversion is performed on each first image so that track mining can be carried out using the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system, which improves the precision of track mining and thus the accuracy of lane line recognition. Clustering the edge position coordinates of each vehicle in the world coordinate system improves the clustering accuracy, and through coordinate conversion a first curve fitted from each clustering result can be obtained in the pixel coordinate system. The curvature corresponding to the vehicle driving track is calculated using the center coordinates in the track, so that the type of the lane line of the target area can be determined according to the curvature.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (14)
1. A lane line identification method is characterized by comprising the following steps:
track mining is carried out by utilizing a plurality of first images of a target area to obtain the driving track of each vehicle, wherein the first images comprise images of the vehicles driving in the target area;
fitting to obtain a first curve by using the driving track of each vehicle;
and determining a first recognition result of the lane line of the target area by using the first curve.
2. The method of claim 1, further comprising:
performing semantic segmentation on the second image of the target area to obtain a second recognition result of the lane line of the target area;
and obtaining a third recognition result of the lane line of the target area by using the first recognition result and the second recognition result.
3. The method of claim 1, wherein fitting to obtain a first curve by using the driving track of each vehicle comprises:
clustering by using edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and fitting by using a plurality of edge coordinates included in each clustering result to obtain the first curve.
4. The method of claim 1, wherein performing trajectory mining using a plurality of first images of the target area to obtain a travel trajectory for each vehicle comprises:
performing coordinate conversion on each first image to obtain the coordinate of each first image in a world coordinate system;
and performing track mining by using the coordinates of the first images in the world coordinate system to obtain the coordinates of the vehicles in the world coordinate system.
5. The method of claim 4, wherein fitting to obtain a first curve by using the driving track of each vehicle comprises:
clustering the edge coordinates of each vehicle under the world coordinate system to obtain a plurality of clustering results;
converting an image including the clustering result into a pixel coordinate system;
and fitting by using a plurality of edge coordinates included in each clustering result under the pixel coordinate system to obtain the first curve.
6. The method of claim 1, further comprising:
and calculating corresponding curvature by using the central coordinates in the vehicle driving track so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight running, left turning, turning around and right turning.
7. A lane line identification apparatus, comprising:
the driving track module is used for carrying out track mining by utilizing a plurality of first images of a target area to obtain the driving track of each vehicle, wherein the first images comprise the images of the vehicles driving in the target area;
the curve fitting module is used for fitting to obtain a first curve by utilizing the driving track of each vehicle;
and the first identification module is used for determining a first identification result of the lane line of the target area by using the first curve.
8. The apparatus of claim 7, further comprising:
the second recognition module is used for performing semantic segmentation on a second image of the target area to obtain a second recognition result of the lane line of the target area;
and the third identification module is used for obtaining a third identification result of the lane line of the target area by using the first identification result and the second identification result.
9. The apparatus of claim 7, wherein the curve fitting module comprises:
the clustering submodule is used for clustering by utilizing the edge coordinates in the driving track of each vehicle to obtain a plurality of clustering results;
and the fitting submodule is used for fitting by utilizing a plurality of edge coordinates included in each clustering result to obtain the first curve.
10. The apparatus of claim 7, wherein the travel track module comprises:
the first conversion submodule is used for carrying out coordinate conversion on each first image to obtain the coordinate of each first image in a world coordinate system;
and the track mining submodule is used for carrying out track mining by utilizing the coordinates of each first image in the world coordinate system to obtain the coordinates of each vehicle in the world coordinate system.
11. The apparatus of claim 10, wherein the curve fitting module comprises:
the clustering submodule is used for clustering the edge coordinates of each vehicle under the world coordinate system to obtain a plurality of clustering results;
a second conversion sub-module for converting the image including the clustering result into a pixel coordinate system;
and the fitting submodule is used for fitting by using a plurality of edge coordinates included in each clustering result under the pixel coordinate system to obtain the first curve.
12. The apparatus of claim 7, further comprising:
and the type determining module is used for calculating corresponding curvature by using the central coordinates in the driving track of the vehicle so as to determine the type of the lane line of the target area according to the curvature, wherein the type comprises at least one of straight running, left turning, turning around and right turning.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010086724.8A CN113255404A (en) | 2020-02-11 | 2020-02-11 | Lane line recognition method and device, electronic device and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010086724.8A CN113255404A (en) | 2020-02-11 | 2020-02-11 | Lane line recognition method and device, electronic device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113255404A true CN113255404A (en) | 2021-08-13 |
Family
ID=77219552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010086724.8A Withdrawn CN113255404A (en) | 2020-02-11 | 2020-02-11 | Lane line recognition method and device, electronic device and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113255404A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114283383A (en) * | 2021-12-28 | 2022-04-05 | 河北工程技术学院 | Smart city highway maintenance method, computer equipment and medium |
CN115147802A (en) * | 2022-09-06 | 2022-10-04 | 福思(杭州)智能科技有限公司 | Lane line prediction method, device, medium, program product, and vehicle |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942960A (en) * | 2014-04-22 | 2014-07-23 | 深圳市宏电技术股份有限公司 | Vehicle lane change detection method and device |
WO2015043510A1 (en) * | 2013-09-27 | 2015-04-02 | 比亚迪股份有限公司 | Lane line detection method and system, and method and system for lane deviation prewarning |
CN105005771A (en) * | 2015-07-13 | 2015-10-28 | 西安理工大学 | Method for detecting full line of lane based on optical flow point locus statistics |
CN105320927A (en) * | 2015-03-25 | 2016-02-10 | 中科院微电子研究所昆山分所 | Lane line detection method and system |
CN106127113A (en) * | 2016-06-15 | 2016-11-16 | 北京联合大学 | A kind of road track line detecting method based on three-dimensional laser radar |
KR20170070458A (en) * | 2015-12-14 | 2017-06-22 | 현대자동차주식회사 | Vehicle and controlling method for the vehicle |
CN107330376A (en) * | 2017-06-06 | 2017-11-07 | 广州汽车集团股份有限公司 | A kind of Lane detection method and system |
CN108052880A (en) * | 2017-11-29 | 2018-05-18 | 南京大学 | Traffic monitoring scene actual situation method for detecting lane lines |
CN108177524A (en) * | 2017-12-22 | 2018-06-19 | 联创汽车电子有限公司 | ARHUD systems and its lane line method for drafting |
US20180181817A1 (en) * | 2015-09-10 | 2018-06-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Vehicular lane line data processing method, apparatus, storage medium, and device |
CN109034047A (en) * | 2018-07-20 | 2018-12-18 | 京东方科技集团股份有限公司 | A kind of method for detecting lane lines and device |
CN109635816A (en) * | 2018-10-31 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Lane line generation method, device, equipment and storage medium |
CN109670455A (en) * | 2018-12-21 | 2019-04-23 | 联创汽车电子有限公司 | Computer vision lane detection system and its detection method |
CN109753841A (en) * | 2017-11-01 | 2019-05-14 | 比亚迪股份有限公司 | Lane detection method and apparatus |
CN109871752A (en) * | 2019-01-04 | 2019-06-11 | 北京航空航天大学 | A method of lane line is extracted based on monitor video detection wagon flow |
CN109931944A (en) * | 2019-04-02 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | A kind of AR air navigation aid, device, vehicle end equipment, server-side and medium |
CN110077399A (en) * | 2019-04-09 | 2019-08-02 | 魔视智能科技(上海)有限公司 | A kind of vehicle collision avoidance method merged based on roadmarking, wheel detection |
CN110210363A (en) * | 2019-05-27 | 2019-09-06 | 中国科学技术大学 | A kind of target vehicle crimping detection method based on vehicle-mounted image |
CN110598541A (en) * | 2019-08-05 | 2019-12-20 | 香港理工大学深圳研究院 | Method and equipment for extracting road edge information |
- 2020-02-11: application CN202010086724.8A filed in China (CN), published as CN113255404A/en; status: not active (withdrawn)
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015043510A1 (en) * | 2013-09-27 | 2015-04-02 | 比亚迪股份有限公司 | Lane line detection method and system, and method and system for lane deviation prewarning |
CN103942960A (en) * | 2014-04-22 | 2014-07-23 | 深圳市宏电技术股份有限公司 | Vehicle lane change detection method and device |
CN105320927A (en) * | 2015-03-25 | 2016-02-10 | 中科院微电子研究所昆山分所 | Lane line detection method and system |
CN105005771A (en) * | 2015-07-13 | 2015-10-28 | 西安理工大学 | Method for detecting full line of lane based on optical flow point locus statistics |
US20180181817A1 (en) * | 2015-09-10 | 2018-06-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Vehicular lane line data processing method, apparatus, storage medium, and device |
KR20170070458A (en) * | 2015-12-14 | 2017-06-22 | 현대자동차주식회사 | Vehicle and controlling method for the vehicle |
CN106127113A (en) * | 2016-06-15 | 2016-11-16 | 北京联合大学 | A kind of road track line detecting method based on three-dimensional laser radar |
CN107330376A (en) * | 2017-06-06 | 2017-11-07 | 广州汽车集团股份有限公司 | A kind of Lane detection method and system |
CN109753841A (en) * | 2017-11-01 | 2019-05-14 | 比亚迪股份有限公司 | Lane detection method and apparatus |
CN108052880A (en) * | 2017-11-29 | 2018-05-18 | 南京大学 | Traffic monitoring scene actual situation method for detecting lane lines |
CN108177524A (en) * | 2017-12-22 | 2018-06-19 | 联创汽车电子有限公司 | ARHUD systems and its lane line method for drafting |
CN109034047A (en) * | 2018-07-20 | 2018-12-18 | 京东方科技集团股份有限公司 | A kind of method for detecting lane lines and device |
US20200026930A1 (en) * | 2018-07-20 | 2020-01-23 | Boe Technology Group Co., Ltd. | Lane line detection method and apparatus |
CN109635816A (en) * | 2018-10-31 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Lane line generation method, device, equipment and storage medium |
CN109670455A (en) * | 2018-12-21 | 2019-04-23 | 联创汽车电子有限公司 | Computer vision lane detection system and its detection method |
CN109871752A (en) * | 2019-01-04 | 2019-06-11 | 北京航空航天大学 | A method of lane line is extracted based on monitor video detection wagon flow |
CN109931944A (en) * | 2019-04-02 | 2019-06-25 | 百度在线网络技术(北京)有限公司 | A kind of AR air navigation aid, device, vehicle end equipment, server-side and medium |
CN110077399A (en) * | 2019-04-09 | 2019-08-02 | 魔视智能科技(上海)有限公司 | A kind of vehicle collision avoidance method merged based on roadmarking, wheel detection |
CN110210363A (en) * | 2019-05-27 | 2019-09-06 | 中国科学技术大学 | A kind of target vehicle crimping detection method based on vehicle-mounted image |
CN110598541A (en) * | 2019-08-05 | 2019-12-20 | 香港理工大学深圳研究院 | Method and equipment for extracting road edge information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110929639B (en) | Method, apparatus, device and medium for determining the position of an obstacle in an image | |
CN111275983B (en) | Vehicle tracking method, device, electronic equipment and computer-readable storage medium | |
CN110738183B (en) | Road side camera obstacle detection method and device | |
CN111723768B (en) | Method, device, equipment and storage medium for vehicle re-identification | |
CN111797187A (en) | Map data updating method and device, electronic equipment and storage medium | |
CN110968718B (en) | Target detection model negative sample mining method and device and electronic equipment | |
CN112528786B (en) | Vehicle tracking method and device and electronic equipment | |
CN112150558B (en) | Obstacle three-dimensional position acquisition method and device for road side computing equipment | |
CN111292531B (en) | Tracking method, device and equipment of traffic signal lamp and storage medium | |
CN111523471B (en) | Method, device, equipment and storage medium for determining lane where vehicle is located | |
CN111767853B (en) | Lane line detection method and device | |
CN111540023B (en) | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium | |
CN111291650A (en) | Automatic parking assistance method and device | |
CN110675635B (en) | Method and device for acquiring external parameters of camera, electronic equipment and storage medium | |
CN115147809B (en) | Obstacle detection method, device, equipment and storage medium | |
CN112668428A (en) | Vehicle lane change detection method, roadside device, cloud control platform and program product | |
CN111652112A (en) | Lane flow direction identification method and device, electronic equipment and storage medium | |
CN111540010B (en) | Road monitoring method and device, electronic equipment and storage medium | |
CN112581533A (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN111275827A (en) | Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment | |
CN113255404A (en) | Lane line recognition method and device, electronic device and computer-readable storage medium | |
CN111191619A (en) | Method, device and equipment for detecting virtual line segment of lane line and readable storage medium | |
CN110458815A (en) | There is the method and device of mist scene detection | |
CN111339877A (en) | Method and device for detecting length of blind area, electronic equipment and storage medium | |
CN112528932B (en) | Method and device for optimizing position information, road side equipment and cloud control platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20211015
Address after: 100176 Room 101, 1st floor, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing
Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd.
Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085
Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20210813 |