CN114926799A - Lane line detection method, device, equipment and readable storage medium
- Publication number
- CN114926799A (application number CN202210560929.4A)
- Authority
- CN
- China
- Prior art keywords
- lane line
- current
- current lane
- frame
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a lane line detection method, device, equipment and readable storage medium. The method comprises the following steps: acquiring a single-frame lane line and performing lane line identification and detection processing on it to obtain a current lane line; matching the current lane line with a preset map to obtain a local map of the area where the current vehicle is located; detecting whether the current lane line is accurate according to the local map and the current lane line; and constructing a virtual lane line when the current lane line is detected to be inaccurate. By identifying and detecting the lane line, matching it with a high-precision map, and checking whether the current lane line is accurate once the local map where the vehicle is located has been obtained, the invention solves the problem that large data errors caused by hardware precision, environmental factors or sensor jitter degrade lane line detection. It improves lane line detection efficiency while greatly improving detection accuracy and reliability, and reduces the demanding detection conditions and large detection errors of existing detection technology.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a lane line detection method, device, equipment and readable storage medium.
Background
With the rise of intelligent assisted driving, lane line detection has become one of its important components, and the field has developed rapidly in recent years. The lane line detection technology used in Advanced Driver Assistance Systems (ADAS) is mainly based on a camera sensor: the current lane line is detected through image and video analysis, and the resulting lane line information supports subsequent lane departure detection and effective departure warning.
In the process of conceiving and realizing this application, the inventor found that, in the prior art, lane line detection based on image and video analysis uses a two-dimensional image acquired by a camera sensor. The two-dimensional image is heavily influenced by the environment and, particularly under poor imaging conditions, is easily disturbed by non-lane-line points, so an ideal result cannot be obtained and the technical requirements of L3 and L4 automated driving are far from being met. In addition, lane line detection based on two-dimensional image information cannot yield a direct physical lane line model; it requires strict calibration according to the camera installation, extracts lane line pixels on the basis of image semantic segmentation, and needs a large amount of labeled data for training to handle multi-scene applications.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a lane line detection method, apparatus, device and readable storage medium that can solve the problem of lane line detection being affected by large data errors caused by hardware precision, environmental factors or sensor jitter.
In order to solve the above problem, a first aspect of the embodiments of the present application provides a lane line detection method, which at least includes the following steps:
acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line;
matching the current lane line with a preset map to obtain a local map of the current vehicle;
detecting whether the current lane line is accurate or not according to the local map and the current lane line;
and when the current lane line is detected to be inaccurate, constructing a corresponding virtual lane line.
In a possible implementation manner of the first aspect, after obtaining the current lane line, the method includes:
obtaining a plurality of frames of lane lines, and performing fusion processing on the plurality of frames of lane lines to obtain the calibrated current lane line.
In a possible implementation manner of the first aspect, the acquiring a multi-frame lane line, and performing fusion processing on the multi-frame lane line to obtain a calibrated current lane line includes:
acquiring multiple frames of lane lines, respectively carrying out lane line identification detection processing on each frame of lane line, and detecting to obtain an initial lane line corresponding to each frame of lane line;
and successively clustering and fusing the multiple groups of initial lane lines according to a target clustering algorithm to obtain the calibrated lane lines.
In a possible implementation manner of the first aspect, after obtaining the current lane line, the method further includes:
and comparing the current lane line with the historical lane line, and judging whether the current lane line deviates from the historical lane line.
In a possible implementation manner of the first aspect, the performing lane line identification and detection processing on the single-frame lane line includes:
performing camera internal and external parameter calibration on the acquired single-frame lane line to obtain first lane line information;
and carrying out coordinate conversion and lane line fitting processing on the first lane line in sequence to obtain the current lane line.
In a possible implementation manner of the first aspect, after obtaining the local map where the current vehicle is located, the method further includes:
and acquiring vehicle visual odometer information, and detecting whether the current lane line is accurate or not according to the vehicle visual odometer information.
In a possible implementation manner of the first aspect, the detecting whether the current lane line is accurate according to the local map and the current lane line includes:
acquiring a local map corresponding to the current lane line;
extracting the position information of the landmark object in the local map corresponding to the current lane;
and matching and identifying the current lane line according to the position information of the landmark object, and detecting whether the current lane line is accurate.
Accordingly, a second aspect of embodiments of the present application provides a lane line detection apparatus, including:
the single-frame lane line extraction module is used for acquiring a single-frame lane line and carrying out lane line identification detection processing on the single-frame lane line to obtain a current lane line;
the map matching module is used for matching the current lane line with a preset map to obtain a local map of the current vehicle;
the lane line detection module is used for detecting whether the current lane line is accurate or not according to the local map and the current lane line;
and the lane line building module is used for building a corresponding virtual lane line when the current lane line is detected to be inaccurate.
The third aspect of the embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the lane line detection method described in any one of the above when executing the computer program.
The fourth aspect of the embodiments of the present application also provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the lane line detection method described in any one of the above.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a lane line detection method, a lane line detection device, lane line detection equipment and a readable storage medium, wherein the method comprises the following steps: acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line; matching the current lane line with a preset map to obtain a local map of the current vehicle; detecting whether the current lane line is accurate or not according to the local map and the current lane line; and when the current lane line is detected to be inaccurate, constructing a corresponding virtual lane line. According to the embodiment of the invention, the lane line is identified and detected, and is matched with the high-precision map, so that whether the current lane line is accurate or not is detected after the local map where the vehicle is located is obtained, and therefore, the problem that the lane line detection is influenced due to larger errors generated by data caused by hardware precision, environmental factors or sensor jitter is solved, the accuracy and reliability of the lane line detection are greatly improved while the lane line detection efficiency is improved, the error of the lane line detection is effectively reduced, and the situations that the existing detection technology needs higher detection conditions and has larger detection errors are reduced.
Drawings
Fig. 1 is a schematic flow chart illustrating a lane line detection method according to an embodiment of the present application;
fig. 2 is a schematic block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or to implicitly indicate the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present application, the meaning of "a plurality" is two or more unless otherwise specified.
The embodiment of the application can be applied to a server, and the server can be an independent server, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data and an artificial intelligence platform.
First, the application scenarios addressed by the present invention are introduced: a lane line detection method, apparatus, device and readable storage medium are provided, which realize lane line detection and improve lane line detection efficiency and accuracy.
The first embodiment of the present invention:
please refer to fig. 1.
As shown in fig. 1, the present embodiment provides a lane line detection method, which at least includes the following steps:
s1, acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line;
s2, matching the current lane line with a preset map to obtain a local map of the current vehicle;
s3, detecting whether the current lane line is accurate or not according to the local map and the current lane line;
and S4, constructing a corresponding virtual lane line when the current lane line is detected to be inaccurate.
In the prior art, lane line detection based on image and video analysis uses a two-dimensional image acquired by a camera sensor; it is heavily influenced by the environment, easily disturbed by non-lane-line points particularly under poor imaging conditions, cannot obtain an ideal result, and is far from meeting the technical requirements of L3 and L4 automated driving. In addition, lane line detection based on two-dimensional image information cannot yield a direct physical lane line model; strict calibration according to the camera installation is required, lane line pixels are extracted on the basis of image semantic segmentation, and a large amount of labeled data must be trained to handle multi-scene applications.
In order to solve the above technical problems, this embodiment provides a lane line detection method in which the lane line is identified and detected, matched with a high-precision map, and checked for accuracy once the local map where the vehicle is located has been obtained. This solves the problem that large data errors caused by hardware precision, environmental factors or sensor jitter degrade lane line detection; it improves lane line detection efficiency while greatly improving detection accuracy and reliability, effectively reduces lane line detection errors, and reduces the demanding detection conditions and large detection errors of existing detection technology.
Specifically, in step S1, a single-frame lane line is acquired by the image acquisition device, and lane line identification detection processing is performed on the acquired single-frame lane line, so as to obtain a current lane line.
In step S2, the extracted current lane line is matched with a preset high-precision map: the position in the high-precision map where the current lane line is located is determined, and a local map is then obtained within a preset range around that position.
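For illustration only, the following Python sketch shows one possible way such a local map could be extracted, assuming the preset map is stored as an array of lane-line points in a planar coordinate frame and the preset range is a simple radius; the data layout, function name and 100 m default are assumptions and not features of this application.

```python
import numpy as np

def extract_local_map(map_lane_points, vehicle_xy, radius_m=100.0):
    """Return the subset of pre-built map lane-line points within `radius_m`
    of the current vehicle position (a minimal local-map query sketch).

    map_lane_points : (N, 2) array of lane-line point coordinates in a
                      planar map frame (an assumed storage format).
    vehicle_xy      : (2,) array, current vehicle position in the same frame.
    """
    map_lane_points = np.asarray(map_lane_points, dtype=float)
    vehicle_xy = np.asarray(vehicle_xy, dtype=float)
    # Euclidean distance from every map point to the vehicle position.
    dists = np.linalg.norm(map_lane_points - vehicle_xy, axis=1)
    return map_lane_points[dists <= radius_m]
```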
In step S3, the current lane line is matched against the lane line in the local map at the corresponding map position, using the position data and characteristics of the current lane line, and it is detected whether the current lane line is accurate.
For step S4, when it is detected that the current lane line cannot be matched or is inaccurate, a corresponding virtual lane line is constructed and used instead of the current lane line.
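The sketch below illustrates, under the assumption of a planar 2-D vehicle pose (x, y, yaw), one way a substitute lane line could be projected from the local map into the vehicle frame; it is an illustrative assumption, not the specific construction used by the embodiment.

```python
import numpy as np

def build_virtual_lane_line(map_lane_points_world, vehicle_pose):
    """Project map lane-line points into the vehicle frame to serve as a
    substitute ("virtual") lane line when the detected one is judged
    inaccurate. A minimal sketch assuming a planar 2-D pose (x, y, yaw).
    """
    x, y, yaw = vehicle_pose
    c, s = np.cos(yaw), np.sin(yaw)
    # World -> vehicle: translate by -(x, y), then rotate by -yaw.
    shifted = np.asarray(map_lane_points_world, dtype=float) - np.array([x, y])
    rot = np.array([[c, s],
                    [-s, c]])          # rotation matrix R(-yaw)
    return shifted @ rot.T             # (N, 2) points in the vehicle frame
```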
In a preferred embodiment, after the obtaining of the current lane line, the method includes:
acquiring a plurality of frames of lane lines, and performing fusion processing on the plurality of frames of lane lines to obtain the calibrated current lane line.
In a specific embodiment, after step S1, the current lane line may be calibrated by acquiring multiple frames of lane lines and performing fusion processing, so that the calibrated current lane line is used as a basis for detecting a subsequent lane line.
In a preferred embodiment, the acquiring a plurality of frames of lane lines, and performing fusion processing on the plurality of frames of lane lines to obtain a calibrated current lane line includes:
acquiring multiple frames of lane lines, respectively carrying out lane line identification detection processing on each frame of lane line, and detecting to obtain an initial lane line corresponding to each frame of lane line;
and successively clustering and fusing the multiple groups of initial lane lines according to a target clustering algorithm to obtain the calibrated lane lines.
In a specific embodiment, a specific process of performing fusion processing by acquiring multiple frames of lane lines to calibrate a current lane line includes: acquiring multiple frames of lane lines acquired by an image acquisition device, and respectively performing lane line identification detection processing on each frame of lane lines, thereby extracting initial lane lines corresponding to each frame of lane lines and generating multiple groups of initial lane lines; and successively clustering and fusing the multiple groups of initial lane lines according to a target clustering algorithm to obtain the calibrated lane lines.
Illustratively, the target clustering algorithm may be a K-means clustering algorithm or a gaussian mixture model clustering algorithm, and the target clustering algorithm is not limited in this embodiment, and can be determined by those skilled in the art as needed.
Optionally, the performing fusion processing on the multiple frames of lane lines to obtain a calibrated current lane line may specifically include:
obtaining, from multiple groups of lane line positioning data of a target road, a plurality of lane line positioning data for any target position in the target road; clustering the plurality of lane line positioning data of that target position according to a target clustering algorithm to obtain a lane line clustering center for that position; repeating the above two steps for a plurality of target positions to obtain lane line clustering centers for the plurality of target positions; and fusing the lane line clustering centers of the plurality of target positions according to a target algorithm to obtain the lane line of the target road.
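For illustration, the following Python sketch outlines the station-by-station clustering and fusion described above, assuming K-means as the target clustering algorithm and that every run samples the same longitudinal stations; the data layout, scikit-learn dependency and function name are assumptions rather than details of this application.

```python
import numpy as np
from sklearn.cluster import KMeans

def fuse_lane_line_runs(runs, clusters_per_station=1):
    """Cluster multiple runs of lane-line positioning data station by station
    and keep the cluster centres as the calibrated lane line.

    runs : array-like of shape (num_runs, num_stations, 2); every run is
           assumed to sample the same longitudinal stations of the road.
    """
    runs = np.asarray(runs, dtype=float)
    num_runs, num_stations, _ = runs.shape
    fused = []
    for s in range(num_stations):
        samples = runs[:, s, :]                                   # all runs at station s
        km = KMeans(n_clusters=clusters_per_station, n_init=10).fit(samples)
        fused.append(km.cluster_centers_[0])                      # per-station cluster centre
    return np.vstack(fused)                                       # calibrated polyline
```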
In addition, when the deviation between any lane line positioning data and the pre-stored map data is greater than a preset threshold value, that lane line positioning data is deleted. Illustratively, the preset threshold is twice the width of the current road: when the difference between the lane longitude and latitude in any lane line positioning data and the lane longitude and latitude of the corresponding lane line in the pre-stored map exceeds the preset threshold, the lane line positioning data is regarded as abnormal data and deleted, which further improves the accuracy of lane line positioning and detection.
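The outlier check described above could look like the following sketch, assuming the positioning data and the pre-stored map have already been converted into a common metric frame; the helper name and inputs are illustrative only.

```python
import numpy as np

def drop_outlier_positionings(lane_positions, map_positions, road_width_m):
    """Discard lane-line positioning samples whose deviation from the
    pre-stored map exceeds the preset threshold (taken here, as in the
    text, as twice the current road width). Both inputs are assumed to be
    (N, 2) arrays of corresponding points in a common metric frame.
    """
    lane_positions = np.asarray(lane_positions, dtype=float)
    map_positions = np.asarray(map_positions, dtype=float)
    threshold = 2.0 * road_width_m
    deviation = np.linalg.norm(lane_positions - map_positions, axis=1)
    return lane_positions[deviation <= threshold]
```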
In a preferred embodiment, after the obtaining of the current lane line, the method further includes:
and comparing the current lane line with the historical lane line, and judging whether the current lane line deviates from the historical lane line.
In a specific embodiment, after the current lane line is obtained, the current lane line can be compared with the historical lane line to judge whether the deviation between them is greater than a preset threshold value; if so, it is judged that the current lane line has deviated from the historical lane line; if not, it is judged that the current lane line has not deviated from the historical lane line.
Optionally, after the calibrated current lane line is obtained, it may also be compared with the historical lane line to judge whether the deviation between them is greater than a preset threshold value; if so, it is judged that the calibrated current lane line has deviated from the historical lane line; if not, it is judged that it has not.
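A minimal sketch of such a deviation check is given below. The mean point-to-point offset metric and the 0.5 m default threshold are assumed illustrative choices, since the application does not fix a specific threshold; the two polylines are assumed to be sampled at the same stations.

```python
import numpy as np

def has_deviated(current_lane, historical_lane, threshold_m=0.5):
    """Compare the current (optionally calibrated) lane line against the
    historical one and flag a deviation when the mean point-to-point
    offset exceeds a preset threshold."""
    current_lane = np.asarray(current_lane, dtype=float)
    historical_lane = np.asarray(historical_lane, dtype=float)
    offset = np.linalg.norm(current_lane - historical_lane, axis=1)
    return float(offset.mean()) > threshold_m
```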
In a preferred embodiment, the performing lane line identification detection processing on the single-frame lane line includes:
carrying out camera internal and external parameter calibration on the acquired single-frame lane line to obtain first lane line information;
and carrying out coordinate conversion and lane line fitting processing on the first lane line in sequence to obtain the current lane line.
In a specific embodiment, step S1 may specifically include: after the single-frame lane line is obtained, camera intrinsic and extrinsic parameter calibration, coordinate conversion and fitting processing are performed on it in sequence to obtain the current lane line. Calibration of the camera's intrinsic and extrinsic parameters is an essential step in image processing; it mainly comprises calculating the intrinsic parameters, extrinsic parameters and distortion parameters of the camera, performing distortion correction with the distortion parameters to generate a corrected image, and reconstructing the three-dimensional scene of the image using the intrinsic and extrinsic parameters. Camera calibration involves four coordinate systems, namely the world coordinate system, the camera coordinate system, the image physical coordinate system and the image pixel coordinate system, so coordinate conversion is required. After the coordinate conversion is completed, the lane line is fitted to obtain the current lane line.
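For illustration, the following OpenCV-based sketch shows one possible single-frame pipeline of undistortion, pixel-to-ground coordinate conversion and polynomial lane-line fitting. The pre-computed inverse-perspective homography, the `lane_mask_fn` pixel extractor and the second-order fit are assumptions made for the example, not details fixed by this application.

```python
import cv2
import numpy as np

def fit_current_lane_line(image, camera_matrix, dist_coeffs, ipm_homography, lane_mask_fn):
    """Single-frame sketch: undistort with the calibrated intrinsic
    parameters, project lane-line pixels to the ground plane with a
    pre-computed inverse-perspective homography, then fit a second-order
    polynomial as the current lane line."""
    # Distortion correction using the calibrated intrinsics / distortion parameters.
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
    # Placeholder lane-pixel extraction (e.g. a semantic-segmentation mask).
    mask = lane_mask_fn(undistorted)                       # binary lane-pixel mask
    v, u = np.nonzero(mask)                                # pixel coordinates (row, col)
    pixels = np.float32(np.stack([u, v], axis=1)).reshape(-1, 1, 2)
    # Image pixel frame -> ground (vehicle) frame via the assumed homography.
    ground = cv2.perspectiveTransform(pixels, ipm_homography).reshape(-1, 2)
    x, y = ground[:, 0], ground[:, 1]
    coeffs = np.polyfit(y, x, deg=2)                       # lateral offset vs. distance
    return coeffs
```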
In a preferred embodiment, after obtaining the local map of the current vehicle, the method further includes:
and acquiring vehicle visual odometer information, and detecting whether the current lane line is accurate or not according to the vehicle visual odometer information.
In a specific embodiment, after step S2 the method may further include detecting whether the current lane line is accurate by means of visual odometry: the visual odometer information of the current vehicle is first obtained, and then monocular-vision-based lane line detection and monocular visual odometer positioning-accuracy optimization are performed. Visual odometry (VO) recovers the 6-degree-of-freedom motion of the vehicle body itself from the image information acquired by the vehicle-mounted camera; the 6 degrees of freedom comprise 3 degrees of rotation and 3 degrees of translation. The visual sensor provides rich perception information, enabling accurate lane line detection.
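A minimal sketch of the frame-to-frame step of a monocular visual odometer is shown below, using ORB features and essential-matrix decomposition in OpenCV. The feature choice and the scale-ambiguous translation are simplifications assumed for the example and do not describe the embodiment's full pipeline.

```python
import cv2
import numpy as np

def relative_pose_from_frames(prev_gray, curr_gray, camera_matrix):
    """Recover the camera's relative rotation and (scale-ambiguous)
    translation between two consecutive grayscale frames, the basic
    building block of a monocular visual odometer."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    # Brute-force Hamming matching with cross-check for ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then decompose into R (3 DoF) and t (3 DoF, up to scale).
    E, inliers = cv2.findEssentialMat(pts1, pts2, camera_matrix, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, camera_matrix, mask=inliers)
    return R, t
```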
In a preferred embodiment, the detecting whether the current lane line is accurate according to the local map and the current lane line includes:
acquiring a local map corresponding to the current lane line;
extracting the position information of the landmark object in the local map corresponding to the current lane;
and matching and identifying the current lane line according to the position information of the landmark object, and detecting whether the current lane line is accurate or not.
In a specific embodiment, for step S3, the local map within the preset range of the current lane line is first obtained; the position information of the landmark objects in the local map corresponding to the current lane is extracted; and the current lane line is matched and identified according to the position information of the landmark objects to detect whether it is accurate.
Optionally, the specific process of step S3 is as follows. First, local maps corresponding to a plurality of target road lane lines are acquired; illustratively, a local map is a map containing the target road lane lines as well as other traffic elements, which may include road surfaces, street lamps, foliage, sky, zebra crossings, buildings and the like. The local maps corresponding to the target road lane lines may be obtained from a database in which they are stored in advance, or by fusing image data and sensor data acquired by a map collection vehicle cruising along the target road lane lines. Secondly, the position information of the landmark objects in the local maps corresponding to the target road lane lines is extracted; a landmark object may be, for example, a zebra crossing, a traffic sign or a large billboard. For example, when the target roads meet at an intersection and there is a zebra crossing at each junction between a target road and the intersection, the zebra crossings may be selected as the landmark objects. When the map collection vehicle collects the target road images, it carries a positioning system (such as GPS) and an image collection device, so that the traffic element image information and the longitude and latitude information are uploaded together. Finally, after the local maps corresponding to the target road lane lines are globally fused according to the position information of the landmark objects, the current lane line is matched and identified according to the position information of the landmark objects, and whether the current lane line is accurate is detected.
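For illustration, the following sketch checks a detected lane line against landmark positions taken from the local map; the lateral-offset comparison, the function name and the 0.5 m tolerance are assumed for the example and are not specified by the application.

```python
import numpy as np

def lane_line_consistent_with_landmarks(lane_points, landmark_positions,
                                        expected_offsets, tolerance_m=0.5):
    """Check the detected lane line against landmark positions from the
    local map (e.g. zebra crossings or sign posts): for each landmark the
    distance to the lane line is compared with the offset the map predicts.
    All coordinates are assumed to be in a common planar frame."""
    lane_points = np.asarray(lane_points, dtype=float)
    ok = []
    for landmark, expected in zip(landmark_positions, expected_offsets):
        # Nearest lane-line point approximates the landmark-to-lane distance.
        d = np.linalg.norm(lane_points - np.asarray(landmark, dtype=float), axis=1).min()
        ok.append(abs(d - expected) <= tolerance_m)
    return all(ok)
```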
The lane line detection method provided by this embodiment comprises the following steps: acquiring a single-frame lane line and performing lane line identification and detection processing on it to obtain a current lane line; matching the current lane line with a preset map to obtain a local map where the current vehicle is located; detecting whether the current lane line is accurate according to the local map and the current lane line; and constructing a corresponding virtual lane line when the current lane line is detected to be inaccurate. By identifying and detecting the lane line, matching it with a high-precision map, and checking whether the current lane line is accurate once the local map where the vehicle is located has been obtained, this embodiment solves the problem that large data errors caused by hardware precision, environmental factors or sensor jitter degrade lane line detection; it improves lane line detection efficiency while greatly improving detection accuracy and reliability, effectively reduces lane line detection errors, and reduces the demanding detection conditions and large detection errors of existing detection technology.
Second embodiment of the invention:
please refer to fig. 2.
As shown in fig. 2, the present embodiment provides a lane line detection apparatus, including:
the single-frame lane line extraction module 100 is configured to acquire a single-frame lane line, and perform lane line identification detection processing on the single-frame lane line to obtain a current lane line;
the map matching module 200 is configured to match the current lane line with a preset map to obtain a local map where the current vehicle is located;
a lane line detection module 300, configured to detect whether the current lane line is accurate according to the local map and the current lane line;
the lane line constructing module 400 is configured to construct a corresponding virtual lane line when it is detected that the current lane line is inaccurate.
Optionally, the lane line detection device may further include:
and the multiframe lane line extraction module is used for acquiring multiframe lane lines, and fusing the multiframe lane lines to obtain the calibrated current lane line.
Optionally, the multi-frame lane line extraction module may specifically include:
the multi-frame lane line extraction unit is used for acquiring multi-frame lane lines, respectively carrying out lane line identification detection processing on each frame of lane lines, and detecting to obtain an initial lane line corresponding to each frame of lane lines;
and the multi-frame lane line clustering unit is used for sequentially clustering and fusing a plurality of groups of initial lane lines according to a target clustering algorithm to obtain the calibrated lane lines.
Optionally, the lane line detection device may further include:
and the history comparison module is used for comparing the current lane line with the historical lane line and judging whether the current lane line deviates from the historical lane line.
Optionally, the single-frame lane line extraction module 100 may specifically include:
the first processing unit is used for carrying out camera internal and external parameter calibration on the acquired single-frame lane line to obtain first lane line information;
and the second processing unit is used for carrying out coordinate conversion and lane line fitting processing on the first lane line in sequence to obtain the current lane line.
Optionally, the lane line detection device may further include:
and the visual odometer module is used for acquiring the visual odometer information of the vehicle and detecting whether the current lane line is accurate or not according to the visual odometer information of the vehicle.
Optionally, the lane line detection module 300 may specifically include:
the map acquisition unit is used for acquiring a local map corresponding to the current lane line;
the position extraction unit is used for extracting the position information of the landmark object in the local map corresponding to the current lane;
and the detection unit is used for matching and identifying the current lane line according to the position information of the landmark object and detecting whether the current lane line is accurate or not.
In this embodiment, a single-frame lane line is first obtained by the single-frame lane line extraction module 100, and lane line identification and detection processing is performed on it to obtain a current lane line; the current lane line is then matched with a preset map by the map matching module 200 to obtain a local map where the current vehicle is located; whether the current lane line is accurate is then detected by the lane line detection module 300 according to the local map and the current lane line; and finally the lane line construction module 400 constructs a corresponding virtual lane line when the current lane line is detected to be inaccurate. By identifying and detecting the lane line, matching it with a high-precision map, and checking whether the current lane line is accurate once the local map where the vehicle is located has been obtained, this embodiment solves the problem that large data errors caused by hardware precision, environmental factors or sensor jitter degrade lane line detection; it improves lane line detection efficiency while greatly improving detection accuracy and reliability, effectively reduces lane line detection errors, and reduces the demanding detection conditions and large detection errors of existing detection technology.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server; its internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the running of the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device is used for storing data used by the lane line detection method. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by the processor, the computer program implements a lane line detection method comprising the following steps: acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line; matching the current lane line with a preset map to obtain a local map of the current vehicle; detecting whether the current lane line is accurate or not according to the local map and the current lane line; and when the current lane line is detected to be inaccurate, constructing a corresponding virtual lane line.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements a lane line detection method, including the steps of: acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line; matching the current lane line with a preset map to obtain a local map of the current vehicle; detecting whether the current lane line is accurate or not according to the local map and the current lane line; and when the current lane line is detected to be inaccurate, constructing a corresponding virtual lane line.
According to the lane line detection method provided above, the lane line is identified and detected, matched with a high-precision map, and checked for accuracy once the local map where the vehicle is located has been obtained. This solves the problem that large data errors caused by hardware precision, environmental factors or sensor jitter degrade lane line detection; it improves lane line detection efficiency while greatly improving detection accuracy and reliability, effectively reduces lane line detection errors, and reduces the demanding detection conditions and large detection errors of existing detection technology.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules may be a logical division, and in actual implementation, there may be another division, for example, a plurality of modules or components may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The foregoing is directed to the preferred embodiment of the present invention, and it is understood that various changes and modifications may be made by one skilled in the art without departing from the spirit of the invention, and it is intended that such changes and modifications be considered as within the scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Claims (10)
1. A lane line detection method is characterized by at least comprising the following steps:
acquiring a single-frame lane line, and performing lane line identification detection processing on the single-frame lane line to obtain a current lane line;
matching the current lane line with a preset map to obtain a local map of the current vehicle;
detecting whether the current lane line is accurate or not according to the local map and the current lane line;
and when the current lane line is detected to be inaccurate, constructing a corresponding virtual lane line.
2. The lane line detection method according to claim 1, comprising, after the obtaining of the current lane line:
obtaining a plurality of frames of lane lines, and performing fusion processing on the plurality of frames of lane lines to obtain the calibrated current lane line.
3. The method according to claim 2, wherein the acquiring a plurality of frames of lane lines, and performing fusion processing on the plurality of frames of lane lines to obtain a calibrated current lane line includes:
acquiring multiple frames of lane lines, respectively carrying out lane line identification detection processing on each frame of lane line, and detecting to obtain an initial lane line corresponding to each frame of lane line;
and successively clustering and fusing the multiple groups of initial lane lines according to a target clustering algorithm to obtain the calibrated lane lines.
4. The lane line detection method according to claim 1, further comprising, after the obtaining of the current lane line:
and comparing the current lane line with the historical lane line, and judging whether the current lane line deviates from the historical lane line.
5. The lane line detection method according to claim 1, wherein the performing lane line identification detection processing on the single-frame lane line includes:
carrying out camera internal and external parameter calibration on the acquired single-frame lane line to obtain first lane line information;
and carrying out coordinate conversion and lane line fitting processing on the first lane line in sequence to obtain the current lane line.
6. The lane line detection method according to claim 1, further comprising, after the obtaining of the local map of the current vehicle:
and acquiring vehicle visual odometer information, and detecting whether the current lane line is accurate or not according to the vehicle visual odometer information.
7. The lane line detection method according to claim 1, wherein the detecting whether the current lane line is accurate according to the local map and the current lane line includes:
acquiring a local map corresponding to the current lane line;
extracting the position information of the landmark object in the local map corresponding to the current lane;
and matching and identifying the current lane line according to the position information of the landmark object, and detecting whether the current lane line is accurate or not.
8. A lane line detection apparatus, comprising:
the single-frame lane line extraction module is used for acquiring a single-frame lane line and carrying out lane line identification detection processing on the single-frame lane line to obtain a current lane line;
the map matching module is used for matching the current lane line with a preset map to obtain a local map of the current vehicle;
the lane line detection module is used for detecting whether the current lane line is accurate or not according to the local map and the current lane line;
and the lane line building module is used for building a corresponding virtual lane line when the current lane line is detected to be inaccurate.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the lane line detection method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the lane line detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210560929.4A CN114926799A (en) | 2022-05-20 | 2022-05-20 | Lane line detection method, device, equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210560929.4A CN114926799A (en) | 2022-05-20 | 2022-05-20 | Lane line detection method, device, equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114926799A true CN114926799A (en) | 2022-08-19 |
Family
ID=82811425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210560929.4A Pending CN114926799A (en) | 2022-05-20 | 2022-05-20 | Lane line detection method, device, equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926799A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117723070A (en) * | 2024-02-06 | 2024-03-19 | 合众新能源汽车股份有限公司 | Method and device for determining map matching initial value, electronic equipment and storage medium |
Similar Documents
Publication | Title
---|---
CN107563419B | Train positioning method combining image matching and two-dimensional code
CN108303721B | Vehicle positioning method and system
CN108303103B | Method and device for determining target lane
CN111858805A | High-precision map updating method, vehicle, server and storage medium
CN112667837A | Automatic image data labeling method and device
CN113989450B | Image processing method, device, electronic equipment and medium
CN109141444B | Positioning method, positioning device, storage medium and mobile equipment
CN105608693A | Vehicle-mounted panoramic around view calibration system and method
CN112740225B | Method and device for determining road surface elements
CN113419245B | Real-time mapping system and mapping method based on V2X
CN111080784B | Ground three-dimensional reconstruction method and device based on ground image texture
US20220057230A1 | Method For Checking Detected Changes To An Environmental Model Of A Digital Area Map
CN111750882B | Method and device for correcting vehicle pose during initialization of navigation map
CN113034566A | High-precision map construction method and device, electronic equipment and storage medium
CN111754388B | Picture construction method and vehicle-mounted terminal
CN111750881A | Vehicle pose correction method and device based on light pole
CN114663852A | Method and device for constructing lane line graph, electronic equipment and readable storage medium
CN114998856A | 3D target detection method, device, equipment and medium of multi-camera image
CN114280582A | Calibration and calibration method and device for laser radar, storage medium and electronic equipment
CN114926799A (en) | Lane line detection method, device, equipment and readable storage medium
CN117274402B | Calibration method and device for camera external parameters, computer equipment and storage medium
CN114252897A | Positioning method, positioning device, electronic equipment and computer storage medium
CN114821539A | Lane line detection method, apparatus, device and medium based on neural network
CN111753901A | Data fusion method, device and system and computer equipment
CN116416588A | Lane line prediction method, lane line prediction device, electronic equipment and storage medium
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |