CN115273012A - Dotted lane line identification method and device, electronic equipment and computer readable medium

Dotted lane line identification method and device, electronic equipment and computer readable medium

Info

Publication number
CN115273012A
CN115273012A (application CN202211000927.6A)
Authority
CN
China
Prior art keywords
lane line
road image
lane
sampling point
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211000927.6A
Other languages
Chinese (zh)
Inventor
胡禹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HoloMatic Technology Beijing Co Ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202211000927.6A
Publication of CN115273012A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a dashed lane line identification method and apparatus, an electronic device, and a computer readable medium. One embodiment of the method comprises: acquiring a road image, where the road image is captured by a front-view camera of a vehicle; extracting lane lines from the road image to obtain a road image lane line group; for each road image lane line in the road image lane line group, performing the following identification processing steps: determining the lane line state corresponding to the road image lane line; in response to determining that the lane line state satisfies a first preset state condition, performing dashed-line compensation processing on the road image lane line based on a historical lane line sampling point group included in a preset lane line container to generate compensated lane line information; and determining the compensated lane line information as the dashed lane line identification result. This embodiment can improve the accuracy of the lane line identification result.

Description

Dotted lane line identification method and device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a dashed lane line identification method and apparatus, an electronic device, and a computer readable medium.
Background
Dashed lane line identification is a technique for identifying the dashed (broken) lane lines on a road. At present, dashed lane lines are generally identified as follows: lane line identification is performed on the entire dashed lane line in the road image through a neural network to obtain a dashed lane line equation.
However, the inventors have found that dashed lane line identification performed in the above manner often suffers from the following technical problem:
if the position closest to the current vehicle in the road image falls in the blank gap between the dashes of a dashed lane line, the extracted dashed lane line has high uncertainty at that blank position, that is, it may deviate to the left or right, which reduces the accuracy of the identified dashed lane line.
the above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, it may contain information that does not form the prior art that is already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a dashed lane line identification method and apparatus, an electronic device, and a computer readable medium to solve the technical problem mentioned in the Background section above.
In a first aspect, some embodiments of the present disclosure provide a dashed lane line identification method, the method comprising: acquiring a road image, where the road image is captured by a front-view camera of a vehicle; extracting lane lines from the road image to obtain a road image lane line group, where each road image lane line in the road image lane line group is a dashed lane line of the lane where the current vehicle is located and corresponds to one dashed lane line; for each road image lane line in the road image lane line group, performing the following identification processing steps: determining the lane line state corresponding to the road image lane line; in response to determining that the lane line state satisfies a first preset state condition, performing dashed-line compensation processing on the road image lane line based on a historical lane line sampling point group included in a preset lane line container to generate compensated lane line information, where each historical lane line sampling point in the historical lane line sampling point group included in the lane line container is a dashed lane line sampling point identified in a historical frame; and determining the compensated lane line information as the dashed lane line identification result.
In a second aspect, some embodiments of the present disclosure provide a dashed lane line identification apparatus, comprising: an acquisition unit configured to acquire a road image, where the road image is captured by a front-view camera of a vehicle; an extraction unit configured to extract lane lines from the road image to obtain a road image lane line group, where each road image lane line in the road image lane line group is a dashed lane line of the lane where the current vehicle is located and corresponds to one dashed lane line; and a recognition processing unit configured to perform, for each road image lane line in the road image lane line group, the following identification processing steps: determining the lane line state corresponding to the road image lane line; in response to determining that the lane line state satisfies a first preset state condition, performing dashed-line compensation processing on the road image lane line based on a historical lane line sampling point group included in a preset lane line container to generate compensated lane line information, where each historical lane line sampling point in the historical lane line sampling point group included in the lane line container is a dashed lane line sampling point identified in a historical frame; and determining the compensated lane line information as the dashed lane line identification result.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: the dashed lane line identification method of some embodiments of the present disclosure can improve the accuracy of the identified dashed lane line. Specifically, the accuracy of identified dashed lane lines is reduced because, if the position closest to the current vehicle in the road image falls in the blank gap between the dashes of a dashed lane line, the extracted dashed lane line has high uncertainty at that blank position, that is, it may deviate to the left or right. Based on this, the dashed lane line identification method of some embodiments of the present disclosure first acquires a road image captured by a front-view camera of the vehicle. Second, lane lines are extracted from the road image to obtain a road image lane line group, where each road image lane line in the group is a dashed lane line of the lane in which the current vehicle is located and corresponds to one dashed lane line. Extracting each dashed lane line makes it available for dashed lane line identification. Then, for each road image lane line in the road image lane line group, the following identification processing steps are performed. First, the lane line state corresponding to the road image lane line is determined; the lane line state can be used to determine whether a dashed lane line exists at the position nearest to the vehicle in the field of view of the road image. Second, in response to determining that the lane line state satisfies a first preset state condition, dashed-line compensation processing is performed on the road image lane line based on the historical lane line sampling point group included in a preset lane line container to generate compensated lane line information, where each historical lane line sampling point in the group is a dashed lane line sampling point identified in a historical frame. The first preset state condition selects the case in which the lane line state indicates that no dashed lane line exists at the nearest position in the field of view of the road image, so the missing nearest segment can be compensated through the dashed-line compensation processing. Because the lane line container is introduced, the dashed lane lines identified in historical frames can participate in the compensation. The compensated result can therefore stand in for the nearest dashed lane line segment in the field of view, avoiding the left or right deviation that arises when a network model directly predicts a position where no dashed lane line exists. Thus, the accuracy of the identified dashed lane line can be improved. Third, the compensated lane line information is determined as the dashed lane line identification result, which improves the accuracy of the lane line identification result.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a dashed lane line identification method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a dashed lane line identification apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a dashed lane line identification method according to the present disclosure. The method for identifying the dashed lane line comprises the following steps:
Step 101, acquiring a road image.
In some embodiments, the execution subject of the dashed lane line identification method may acquire the road image through a wired or wireless connection. The road image is captured by a front-view camera of a vehicle. The vehicle front-view camera may be an on-board front-view camera of the current vehicle. The current vehicle may be the vehicle performing dashed lane line identification.
Step 102, extracting lane lines from the road image to obtain a road image lane line group.
In some embodiments, the execution subject may perform lane line extraction on the road image to obtain a road image lane line group. A road image lane line in the road image lane line group may be a dashed lane line of the lane where the current vehicle is located, and each road image lane line in the road image lane line group may correspond to one dashed lane line. As an example, there may be one dashed lane line on each of the left and right sides of the lane where the current vehicle is located; the road image lane line group then includes two road image lane lines.
In some optional implementations of some embodiments, the execution subject performing lane line extraction on the road image to obtain the road image lane line group may include the following steps:
In the first step, lane line sampling points are extracted from the road image to obtain a lane line sampling point sequence group, where each lane line sampling point sequence in the lane line sampling point sequence group corresponds to one dashed lane line. The lane line sampling points may be extracted from the road image through a preset extraction algorithm. Here, a lane line sampling point may be a two-dimensional coordinate point in the image coordinate system of the road image. In addition, when generating the lane line sampling points, a confidence corresponding to each lane line sampling point may also be generated.
By way of example, the above extraction algorithm may include, but is not limited to, at least one of: UFLD (Ultra Fast Structure-aware Lane Detection), LaneNet (a multi-branch lane detection network), LSD (Line Segment Detection), and the like.
In practice, the training samples for the above extraction algorithm may leave the region unlabeled where the position closest to the current vehicle in the sample lane line image is a blank gap between the dashes of a dashed lane line. The extraction algorithm therefore avoids extracting lane line sampling points at blank positions closest to the current vehicle in the image. This avoids the situation in which the extracted dashed lane line has high uncertainty at the blank position, that is, deviates to the left or right, and can thus improve the accuracy of the dashed lane line identification result.
In the second step, the lane line sampling points in each lane line sampling point sequence of the lane line sampling point sequence group are fitted to generate a road image lane line, thereby obtaining the road image lane line group. Here, a road image lane line in the road image lane line group may be a two-dimensional lane line in the image coordinate system.
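As an illustrative sketch only (the polynomial degree, the u = f(v) parameterization, and the use of numpy are assumptions, not the patent's reference implementation), fitting one lane line sampling point sequence in the image coordinate system could look like this:

```python
import numpy as np

def fit_road_image_lane_line(sample_points, degree=3):
    """Fit one lane line sampling point sequence (pixel coordinates) with a
    polynomial u = f(v), since lane lines run roughly vertically in a
    front-view image; degree 3 is an assumed choice."""
    pts = np.asarray(sample_points, dtype=float)
    u, v = pts[:, 0], pts[:, 1]
    return np.polyfit(v, u, deg=degree)  # least-squares fit, highest order first

# Example: two sampling point sequences -> a road image lane line group of two lines.
lane_line_sample_sequences = [
    [(310, 700), (330, 600), (355, 500), (385, 400)],  # left dashed lane line
    [(900, 700), (870, 600), (835, 500), (795, 400)],  # right dashed lane line
]
road_image_lane_line_group = [fit_road_image_lane_line(s) for s in lane_line_sample_sequences]
```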
Step 103, for each road image lane line in the road image lane line group, performing the following identification processing steps:
and step 1031, determining the state of the lane line corresponding to the lane line of the road image.
In some embodiments, the execution subject may determine the lane line state corresponding to the road image lane line group in various ways.
In some optional implementations of some embodiments, the execution subject determining the lane line state corresponding to the road image lane line may include:
in response to determining that the road image lane line satisfies a preset lane line position condition, determining a preset lane line proximity identifier as the lane line state corresponding to the road image lane line. The preset lane line position condition may be that the minimum distance between the road image lane line and the current vehicle in the road image is zero. A minimum distance of zero indicates that a dashed lane line exists at the position in the road image nearest to the current vehicle. The lane line proximity identifier therefore represents the state in which a dashed lane line exists at the nearest position in the road image.
In some optional implementations of some embodiments, the execution subject determining the lane line state corresponding to the road image lane line may further include:
in response to determining that the road image lane line does not satisfy the preset lane line position condition, determining a preset lane line non-proximity identifier as the lane line state corresponding to the road image lane line. A minimum distance between the road image lane line and the current vehicle in the road image that is not zero indicates that no dashed lane line exists at the position in the road image nearest to the current vehicle. The lane line non-proximity identifier therefore represents the state in which no dashed lane line exists at the nearest position in the road image.
Step 1032, in response to determining that the lane line state satisfies a first preset state condition, performing dashed-line compensation processing on the road image lane line based on the historical lane line sampling point group included in a preset lane line container to generate compensated lane line information.
In some embodiments, in response to determining that the lane line state satisfies the first preset state condition, the execution subject may perform dashed-line compensation processing on the road image lane line based on the historical lane line sampling point group included in the preset lane line container to generate compensated lane line information. The first preset state condition may be that the lane line state is the lane line non-proximity identifier. Each historical lane line sampling point in the historical lane line sampling point group included in the lane line container is a dashed lane line sampling point identified in a historical frame. The lane line container may be a container for storing historical lane line sampling points. The historical lane line sampling points in the historical lane line sampling point group may correspond to the dashed lane line identification results of consecutive historical frames. Here, a dashed lane line identified in a historical frame is a dashed lane line identified from the road image at that historical frame time.
As an example, the lane line container may be a preset database table, a list in a cache, or the like.
In practice, a different lane line container may be preset for each dashed lane line. Here, the historical lane line sampling points in the lane line container correspond to the same dashed lane line as the road image lane line.
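Structurally, the identification processing step reduces to a dispatch on the lane line state. The following sketch only illustrates that control flow; the function and constant names are placeholders, and the compensation itself is sketched separately below:

```python
def identification_processing_step(road_image_lane_line, lane_line_state,
                                   historical_sampling_points, compensate_fn):
    """Identification processing step for one road image lane line (sketch).

    compensate_fn: callable implementing the dashed-line compensation
    (historical sampling points -> compensated lane line information)."""
    if lane_line_state == "lane_line_non_proximity":   # first preset state condition
        return compensate_fn(road_image_lane_line, historical_sampling_points)
    # Second preset state condition: the extracted lane line is itself the result.
    return road_image_lane_line
```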
In some optional implementations of some embodiments, the execution subject performing dashed-line compensation processing on the road image lane line based on the historical lane line sampling point group included in the preset lane line container to generate compensated lane line information may include the following steps:
the method comprises the following steps that firstly, coordinate conversion is carried out on each historical lane line sampling point in a historical lane line sampling point group included in the lane line container, and a sampling point sequence group after conversion is obtained. And the converted sampling points in the converted sampling point sequence group are in a vehicle coordinate system of the current vehicle at the current moment. Here, the coordinate conversion may be a conversion of the lane line sampling points from the vehicle coordinate system at the past frame time to the vehicle coordinate system at the above-described current time.
In the second step, each converted sampling point sequence in the converted sampling point sequence group is screened to obtain a post-screening lane line sampling point set. First, the road image lane line may be converted from the image coordinate system into the vehicle coordinate system to obtain a target road image lane line. Then, the maximum abscissa value of the target road image lane line in the vehicle coordinate system may be determined. Next, the converted sampling points whose abscissa values are greater than the maximum abscissa value may be removed from the converted sampling point sequence group to obtain a post-removal sampling point sequence group. Finally, each sampling point remaining in the post-removal sampling point sequence group may be taken as a post-screening lane line sampling point to obtain the post-screening lane line sampling point set.
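A sketch of the screening step under the same planar assumption, where x is the longitudinal abscissa in the vehicle frame (the variable names are illustrative):

```python
def screen_converted_points(converted_point_sequences, target_lane_line_points):
    """Keep only converted historical points whose abscissa does not exceed the
    maximum abscissa of the target road image lane line in the vehicle frame."""
    x_max = max(x for x, _ in target_lane_line_points)
    post_screening = []
    for sequence in converted_point_sequences:
        post_screening.extend(p for p in sequence if p[0] <= x_max)
    return post_screening   # post-screening lane line sampling point set
```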
In the third step, a coordinate score value is determined for each post-screening lane line sampling point in the post-screening lane line sampling point set based on the timestamp of the current time and the timestamp corresponding to that sampling point, so as to obtain a coordinate score value set. Specifically, for each post-screening lane line sampling point in the post-screening lane line sampling point set: first, the weight corresponding to the confidence of the sampling point may be selected from a preset weight-confidence relation table. Then, the time difference between the timestamp of the current time and the timestamp of the sampling point may be determined. Next, a power operation may be performed with the time difference as the exponent and a preset forgetting factor as the base to obtain a coordinate parameter value. Finally, the product of the selected weight and the coordinate parameter value may be determined as the coordinate score value.
Here, the weight-confidence relation table may be a preset data table of correspondences between confidences and weights, used to select the weight corresponding to a given confidence. The forgetting factor is a base parameter participating in generating the coordinate score value. For example, the forgetting factor may be 1. In addition, the weight may be in the range [0, 1], and the coordinate parameter value may be in the range (0, 1).
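A sketch of the coordinate score value; the form of the weight-confidence relation table, the example forgetting factor of 0.95, and timestamps measured in seconds are placeholders, not values from the patent:

```python
def coordinate_score(confidence, point_timestamp, current_timestamp,
                     forgetting_factor=0.95,
                     weight_table=((0.8, 1.0), (0.5, 0.6), (0.0, 0.3))):
    """Coordinate score value of one post-screening lane line sampling point.

    weight_table: assumed form of the weight-confidence relation table as
    (confidence lower bound, weight) pairs checked in order.
    The score is weight * forgetting_factor ** dt, so with a base below 1 the
    contribution of older points decays toward zero."""
    weight = next(w for lower, w in weight_table if confidence >= lower)
    dt = current_timestamp - point_timestamp          # time difference in seconds
    return weight * forgetting_factor ** dt

# Example: a point observed 2 s ago with confidence 0.9.
score = coordinate_score(0.9, point_timestamp=10.0, current_timestamp=12.0)
# A score below the preset threshold (e.g. 0.01) marks the point for removal.
```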
In the fourth step, the post-screening lane line sampling points whose coordinate score values do not satisfy a preset score value condition are removed from the post-screening lane line sampling point set to obtain a post-removal lane line sampling point set. The preset score value condition may be that the coordinate score value is greater than or equal to a preset threshold (e.g., 0.01).
In practice, a coordinate score value that does not satisfy the preset score value condition indicates that the corresponding post-screening lane line sampling point is too old. The historical lane line sampling points corresponding to such post-screening lane line sampling points may also be deleted from the lane line container. This prevents the number of historical lane line sampling points in the lane line container from growing without bound when the current vehicle moves slowly or stops and the lane line state therefore remains the lane line non-proximity identifier for a long time.
In the fifth step, curve fitting is performed on each lane line sampling point in the post-removal lane line sampling point set based on the coordinate score value set to generate the compensated lane line information. The compensated lane line information may include a dashed lane line, which may be a three-dimensional lane line equation in the vehicle coordinate system at the current time. Here, each coordinate score value may be used as a coefficient applied to the coordinates of the corresponding sampling point so as to construct an over-determined system of equations. Finally, the over-determined system can be solved, and the fitted lane line equation is obtained as the compensated lane line information.
As an example, the solution may be performed by a least squares method, a singular value decomposition method, a random sample consensus algorithm, or the like.
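A sketch of the fifth step as a weighted least-squares fit, where each coordinate score value scales the corresponding row of the over-determined system. The cubic polynomial, the planar y = f(x) model (the patent mentions a three-dimensional lane line equation), and the row-scaling scheme are assumptions:

```python
import numpy as np

def fit_compensated_lane_line(points, scores, degree=3):
    """Weighted least-squares fit of the post-removal sampling points in the
    current vehicle coordinate system; returns coefficients of y = f(x),
    highest order first."""
    pts = np.asarray(points, dtype=float)
    w = np.asarray(scores, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.vander(x, degree + 1) * w[:, None]   # one row per sampling point, scaled by its score
    b = y * w
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```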
Step 1033, determining the compensated lane line information as the dashed lane line identification result.
In some embodiments, the execution subject may determine the compensated lane line information as the dashed lane line identification result.
Optionally, the execution subject may further perform the following steps:
In the first step, in response to determining that the lane line state satisfies a second preset state condition, the road image lane line is determined as the dashed lane line identification result. The second preset state condition may be that the lane line state is the lane line proximity identifier.
In the second step, the determined dashed lane line identification result is sent to a display terminal for display.
Optionally, the execution subject may further perform the following steps:
In the first step, in response to determining that the lane line state satisfies the second preset state condition, the historical lane line state corresponding to the road image lane line is determined, and coordinate conversion processing is performed on the road image lane line to obtain a converted road image lane line. The historical lane line state may be the lane line state of the previous frame for the dashed lane line corresponding to the road image lane line. The coordinate conversion converts the road image lane line from the image coordinate system into the vehicle coordinate system.
In the second step, the lane line sampling points corresponding to the converted road image lane line are stored into the lane line container as historical lane line sampling points, to be used when the lane line state satisfies the first preset state condition. Here, each dashed lane line may correspond to one lane line container.
In the third step, in response to determining that the historical lane line state satisfies the first preset state condition, the lane line container is emptied. The historical lane line state satisfying the first preset state condition indicates that no dashed lane line existed at the nearest position in the road image of the previous frame. Here, if the lane line state of the previous frame indicates that no dashed lane line exists at the nearest position in the road image while the lane line state of the current frame indicates that a dashed lane line does exist there, the historical lane line sampling points stored in the lane line container are no longer needed from the current frame onward. Therefore, the historical lane line sampling points in the lane line container can be cleared, and recording of road image lane lines can start afresh from the current frame for the next time the lane line state indicates that no dashed lane line exists at the nearest position in the road image. This avoids using road image lane lines separated by a long time interval in the identification processing step for the portion where no dashed lane line exists, and can thus further improve the accuracy of the lane line identification result.
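A minimal sketch of a per-lane-line container with the store and clear operations described above; the in-memory list layout is an assumption (the text also mentions a database table or a list in a cache):

```python
class LaneLineContainer:
    """Holds historical lane line sampling points for one dashed lane line."""

    def __init__(self):
        self._points = []   # each entry: {"xy": (x, y), "t": timestamp, "conf": confidence}

    def store(self, converted_points, timestamp, confidences):
        """Store vehicle-frame sampling points of the converted road image lane line."""
        for xy, conf in zip(converted_points, confidences):
            self._points.append({"xy": xy, "t": timestamp, "conf": conf})

    def historical_sampling_points(self):
        return list(self._points)

    def clear(self):
        """Emptied when the previous frame satisfied the first preset state
        condition but the current frame sees a dashed lane line at the nearest
        position, so the stale history is no longer needed."""
        self._points.clear()
```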
The above embodiments of the present disclosure have the following advantages: the dashed lane line identification method of some embodiments of the present disclosure can improve the accuracy of the identified dashed lane line. Specifically, the accuracy of identified dashed lane lines is reduced because, if the position closest to the current vehicle in the road image falls in the blank gap between the dashes of a dashed lane line, the extracted dashed lane line has high uncertainty at that blank position, that is, it may deviate to the left or right. Based on this, the dashed lane line identification method of some embodiments of the present disclosure first acquires a road image captured by a front-view camera of the vehicle. Second, lane lines are extracted from the road image to obtain a road image lane line group, where each road image lane line in the group is a dashed lane line of the lane in which the current vehicle is located and corresponds to one dashed lane line. Extracting each dashed lane line makes it available for dashed lane line identification. Then, for each road image lane line in the road image lane line group, the following identification processing steps are performed. First, the lane line state corresponding to the road image lane line is determined; the lane line state can be used to determine whether a dashed lane line exists at the position nearest to the vehicle in the field of view of the road image. Second, in response to determining that the lane line state satisfies a first preset state condition, dashed-line compensation processing is performed on the road image lane line based on the historical lane line sampling point group included in a preset lane line container to generate compensated lane line information, where each historical lane line sampling point in the group is a dashed lane line sampling point identified in a historical frame. The first preset state condition selects the case in which the lane line state indicates that no dashed lane line exists at the nearest position in the field of view of the road image, so the missing nearest segment can be compensated through the dashed-line compensation processing. Because the lane line container is introduced, the dashed lane lines identified in historical frames can participate in the compensation. The compensated result can therefore stand in for the nearest dashed lane line segment in the field of view, avoiding the left or right deviation that arises when a network model directly predicts a position where no dashed lane line exists. Thus, the accuracy of the identified dashed lane line can be improved. Third, the compensated lane line information is determined as the dashed lane line identification result, which improves the accuracy of the lane line identification result.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a dashed lane line identification apparatus. These apparatus embodiments correspond to the method embodiments shown in fig. 1, and the apparatus may be applied in various electronic devices.
As shown in fig. 2, the dashed lane line identification apparatus 200 of some embodiments includes: an acquisition unit 201, an extraction unit 202, and a recognition processing unit 203. The acquisition unit 201 is configured to acquire a road image, where the road image is captured by a front-view camera of a vehicle; the extraction unit 202 is configured to extract lane lines from the road image to obtain a road image lane line group, where each road image lane line in the road image lane line group is a dashed lane line of the lane where the current vehicle is located and corresponds to one dashed lane line; and the recognition processing unit 203 is configured to perform, for each road image lane line in the road image lane line group, the following identification processing steps: determining the lane line state corresponding to the road image lane line; in response to determining that the lane line state satisfies a first preset state condition, performing dashed-line compensation processing on the road image lane line based on a historical lane line sampling point group included in a preset lane line container to generate compensated lane line information, where each historical lane line sampling point in the historical lane line sampling point group included in the lane line container is a dashed lane line sampling point identified in a historical frame; and determining the compensated lane line information as the dashed lane line identification result.
It will be understood that the units described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 200 and the units included therein, and are not described herein again.
Referring now to fig. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus described above; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a road image, wherein the road image is shot by a vehicle front-view camera; extracting lane lines of the road image to obtain a road image lane line group, wherein the road image lane lines in the road image lane line group are dotted lane lines of a lane where a current vehicle is located, and each road image lane line in the road image lane line group corresponds to one dotted lane line; for each road image lane line in the road image lane line group, performing the following identification processing steps: determining the state of the lane line corresponding to the lane line of the road image; performing dotted line compensation processing on the road image lane based on a historical lane line sampling point group included by a preset lane line container in response to the fact that the state of the lane line meets a first preset state condition, so as to generate compensated lane line information, wherein each historical lane line sampling point in the historical lane line sampling point group included by the lane line container is a dotted line lane line sampling point identified by a historical frame; and determining the compensated lane line information as a dotted lane line identification result.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, an extraction unit, and an identification processing unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires a road image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (10)

1. A method for identifying a dashed lane line comprises the following steps:
acquiring a road image, wherein the road image is captured by a vehicle front-view camera;
extracting lane lines of the road image to obtain a road image lane line group, wherein the road image lane lines in the road image lane line group are dotted lane lines of a lane where a current vehicle is located, and each road image lane line in the road image lane line group corresponds to one dotted lane line;
for each road image lane line in the road image lane line group, performing the following identification processing steps:
determining a lane line state corresponding to the lane line of the road image;
in response to determining that the state of the lane line meets a first preset state condition, performing dotted line compensation processing on the lane line of the road image based on a historical lane line sampling point group included by a preset lane line container to generate compensated lane line information, wherein each historical lane line sampling point in the historical lane line sampling point group included by the lane line container is a dotted line lane line sampling point identified by a historical frame;
and determining the compensated lane line information as a dotted lane line identification result.
2. The method of claim 1, wherein the method further comprises:
determining the road image lane line as a dotted line lane line recognition result in response to determining that the lane line state satisfies a second preset state condition;
and sending the determined dotted lane line identification result to a display terminal for displaying.
3. The method of claim 2, wherein the method further comprises:
in response to determining that the lane line state satisfies the second preset state condition, determining a historical lane line state corresponding to the road image lane line, and performing coordinate conversion processing on the road image lane line to obtain a converted road image lane line;
storing lane line sampling points corresponding to the converted road image lane line into the lane line container as historical lane line sampling points, for use when the lane line state satisfies the first preset state condition;
and in response to determining that the historical lane line state meets the first preset state condition, emptying the lane line container.
4. The method of claim 1, wherein the extracting the lane lines of the road image to obtain the lane line group of the road image comprises:
extracting lane line sampling points of the road image to obtain a lane line sampling point sequence group, wherein each lane line sampling point sequence in the lane line sampling point sequence group corresponds to a dotted lane line;
and fitting each lane line sampling point in each lane line sampling point sequence in the lane line sampling point sequence group to generate a road image lane line so as to obtain a road image lane line group.
5. The method of claim 4, wherein the determining a lane line state corresponding to the road image lane line comprises:
in response to determining that the road image lane line satisfies a preset lane line position condition, determining a preset lane line proximity identifier as the lane line state corresponding to the road image lane line.
6. The method of claim 5, wherein the determining a lane line state corresponding to the road image lane line further comprises:
in response to determining that the road image lane line does not satisfy the preset lane line position condition, determining a preset lane line non-proximity identifier as the lane line state corresponding to the road image lane line.
7. The method of claim 6, wherein the performing a dotted line compensation process on the road image lane line based on the set of historical lane line sampling points included in the preset lane line container to generate compensated lane line information comprises:
performing coordinate conversion on each historical lane line sampling point in a historical lane line sampling point group included in the lane line container to obtain a converted sampling point sequence group, wherein each converted sampling point in the converted sampling point sequence group is in a vehicle coordinate system of the current vehicle at the current moment;
screening each converted sampling point sequence in the converted sampling point sequence group to obtain a set of screened lane line sampling points;
determining a coordinate scoring value of each screened lane line sampling point in the screened lane line sampling point set based on a timestamp of the current moment and a timestamp corresponding to each screened lane line sampling point in the screened lane line sampling point set to obtain a coordinate scoring value set;
removing, from the screened lane line sampling point set, the screened lane line sampling points whose coordinate score values do not satisfy a preset score value condition, to obtain a removed lane line sampling point set;
and performing curve fitting on each removed lane line sampling point in the removed lane line sampling point set based on the coordinate scoring value set to generate compensated lane line information, wherein the compensated lane line information comprises a dotted lane line.
8. A dashed lane line identification apparatus, comprising:
an acquisition unit configured to acquire a road image, wherein the road image is captured by a vehicle front-view camera;
an extraction unit configured to extract lane lines from the road image to obtain a road image lane line group, wherein the road image lane lines in the road image lane line group are dashed lane lines of a lane where a current vehicle is located, and each road image lane line in the road image lane line group corresponds to one dashed lane line;
a recognition processing unit configured to perform, for each road image lane line in the road image lane line group, the following recognition processing steps:
determining a lane line state corresponding to the lane line of the road image;
in response to determining that the state of the lane line meets a first preset state condition, performing dotted line compensation processing on the lane line of the road image based on a historical lane line sampling point group included by a preset lane line container to generate compensated lane line information, wherein each historical lane line sampling point in the historical lane line sampling point group included by the lane line container is a dotted line lane line sampling point identified by a historical frame;
and determining the compensated lane line information as a dotted lane line identification result.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202211000927.6A 2022-08-19 2022-08-19 Dotted lane line identification method and device, electronic equipment and computer readable medium Pending CN115273012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211000927.6A CN115273012A (en) 2022-08-19 2022-08-19 Dotted lane line identification method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211000927.6A CN115273012A (en) 2022-08-19 2022-08-19 Dotted lane line identification method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN115273012A true CN115273012A (en) 2022-11-01

Family

ID=83753461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211000927.6A Pending CN115273012A (en) 2022-08-19 2022-08-19 Dotted lane line identification method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115273012A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115546766A (en) * 2022-11-30 2022-12-30 广汽埃安新能源汽车股份有限公司 Lane line generation method, lane line generation device, electronic device, and computer-readable medium
CN115546766B (en) * 2022-11-30 2023-04-07 广汽埃安新能源汽车股份有限公司 Lane line generation method, lane line generation device, electronic device, and computer-readable medium

Similar Documents

Publication Publication Date Title
CN112348029B (en) Local map adjusting method, device, equipment and computer readable medium
CN114399589B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN114964296B (en) Vehicle driving path planning method, device, equipment and computer readable medium
CN115257727A (en) Obstacle information fusion method and device, electronic equipment and computer readable medium
CN115540894B (en) Vehicle trajectory planning method and device, electronic equipment and computer readable medium
CN115326099A (en) Local path planning method and device, electronic equipment and computer readable medium
CN115272182B (en) Lane line detection method, lane line detection device, electronic equipment and computer readable medium
CN115273012A (en) Dotted lane line identification method and device, electronic equipment and computer readable medium
CN114894205A (en) Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN111126159A (en) Method, apparatus, electronic device, and medium for tracking pedestrian in real time
CN111586295B (en) Image generation method and device and electronic equipment
CN115326079B (en) Vehicle lane level positioning method, device, equipment and computer readable medium
CN113780247B (en) Traffic light detection method and device, electronic equipment and computer readable medium
CN113808134B (en) Oil tank layout information generation method, oil tank layout information generation device, electronic apparatus, and medium
CN115393826A (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115049624A (en) Method, device, equipment, medium and product for sending early warning information of highway cracks
CN111950238B (en) Automatic driving fault scoring table generation method and device and electronic equipment
CN114119973A (en) Spatial distance prediction method and system based on image semantic segmentation network
CN113568997A (en) Point cloud map updating method and device, electronic equipment and computer readable medium
CN114863025B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN116430338B (en) Method, system and equipment for tracking moving target
CN114863026B (en) Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN115616560B (en) Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium
CN113392816B (en) Pavement disease detection method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination