CN113378719A - Lane line recognition method and device, computer equipment and storage medium - Google Patents

Lane line recognition method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN113378719A
CN113378719A (application CN202110656597.5A)
Authority
CN
China
Prior art keywords
lane
image
line
lane line
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110656597.5A
Other languages
Chinese (zh)
Other versions
CN113378719B (en
Inventor
许杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qingwei Rufeng Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110656597.5A priority Critical patent/CN113378719B/en
Publication of CN113378719A publication Critical patent/CN113378719A/en
Application granted granted Critical
Publication of CN113378719B publication Critical patent/CN113378719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The application relates to a lane line identification method and device, computer equipment and a storage medium. The lane line identification method comprises the following steps: acquiring a first image and a second image of the surrounding environment; identifying lane line information from the first image to obtain a first lane line; fusing a plurality of second images to obtain a third image, the third image comprising at least one congested lane, the number of vehicles on the congested lane being larger than a preset value; obtaining a lane center line of at least one lane based on the third image; and calibrating the first lane line by using the lane center line to obtain the lane line. The first lane line is identified from the first image, while the plurality of second images are fused into a third image that contains sufficient information about vehicles travelling on the lanes. Because enough second images can be collected, enough samples of vehicle travel are obtained, so the lane center line can be determined as accurately as possible and a more accurate lane line can therefore be obtained.

Description

Lane line recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of automatic driving, and in particular, to a lane line identification method, apparatus, computer device, and storage medium.
Background
Unmanned driving refers to technology that enables an automobile to travel normally without a human operator. The technology is maturing rapidly and, because unmanned vehicles can save considerable human resources, it is receiving more and more attention. An unmanned vehicle performs positioning and environment sensing by means of components such as a GPS (global positioning system), radar and cameras to determine its current position and the surrounding conditions, and a processor in the vehicle controls the vehicle based on this information so that it can travel normally and stably.
During environment sensing, the unmanned vehicle needs to identify the surrounding lane lines so that it can be controlled to travel within a lane. In the prior art, a camera on the unmanned vehicle acquires a picture of the surrounding road, the picture is subjected to view-angle conversion, and lane elements in the picture are detected to determine the lane. However, when the lane line is identified only from images acquired by the vehicle-end camera, the real-time image contains limited information: the lane line may be blocked by other vehicles or covered by sand, snow or ice, so the accuracy of the identified lane line is not high.
Disclosure of Invention
In view of the above, it is necessary to provide a lane line recognition method, apparatus, computer device, and storage medium to address the low recognition accuracy of conventional lane line recognition methods that rely only on a vehicle-mounted camera.
The first aspect of the present application provides a lane line identification method, applied to an automatic driving assistance device, including:
acquiring a first image and a second image of the surrounding environment;
recognizing lane line information from the first image to obtain a first lane line;
fusing the plurality of second images to obtain a third image; the third image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
obtaining a lane center line of at least one lane based on the third image;
and calibrating the first lane line by using the lane central line to obtain the lane line.
In one embodiment, the step of fusing the plurality of second images to obtain a third image specifically includes:
determining a reference image from the plurality of second images and determining a reference;
identifying and extracting all vehicles in the second image except the reference image;
and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a third image.
In one embodiment, the step of calibrating the first lane line by using the lane center line to obtain the lane line specifically includes:
determining first lane lines on two sides corresponding to the center line of each lane;
acquiring parameters of an image sensor, wherein the parameters at least comprise zooming;
based on the parameters of the image sensor, the lane central line is superposed and fused with the first lane lines on the two sides respectively to obtain the presenting lane line.
The second aspect of the present application provides a method for identifying a lane line, which is applied to an automatic driving terminal, and includes:
acquiring a first historical image and a second historical image of the surrounding environment;
identifying lane line information from the first historical image to obtain a first lane line;
fusing the plurality of second historical images to obtain a composite image; the composite image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
obtaining lane center lines of at least one lane based on the composite image;
and calibrating the first lane line by using the lane central line to obtain a second lane line.
In one embodiment, the step of fusing the plurality of second history images to obtain a composite image specifically includes:
determining a reference image from the second history image and determining a reference;
identifying and extracting all vehicles in the second history image except the reference image;
and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a composite image.
In one embodiment, the step of calibrating the first lane line by using the lane center line to obtain the second lane line specifically includes:
determining first lane lines on two sides corresponding to the center line of each lane;
acquiring a first parameter of an image sensor for shooting a second history image, wherein the first parameter at least comprises zooming;
and on the basis of the first parameters, overlapping and fusing the lane central lines and the lane lines on the two sides respectively to obtain second lane lines.
In one embodiment, the method further comprises the following steps:
acquiring a real-time image of the surrounding environment, and identifying a third lane line from the real-time image; the real-time image is shot by an image sensor of the automatic driving terminal;
and combining the second lane line and the third lane line to obtain an output lane line.
A third aspect of the present application provides a lane line identification apparatus including an acquisition section, an image identification section, an image fusion section, a center line extraction section, and a calibration section, wherein,
acquisition means for acquiring a first image and a second image;
the image identification component is used for identifying lane line information from the first image to obtain a first lane line;
the image fusion component is used for fusing the plurality of second images to obtain a third image; the third image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
center line extraction means for obtaining a lane center line of at least one lane based on the third image;
and the calibration component is used for calibrating the first lane line by using the lane center line to obtain the lane line.
According to the above lane line identification method and device, the first lane line is identified from the first image while the plurality of second images are fused into a third image that contains enough information about vehicles travelling on the lane; the lane center line can therefore be obtained from the vehicle travel information and used to calibrate the first lane line, yielding the finally presented lane line. Because enough second images can be collected over time, enough samples of vehicle travel are obtained, the lane center line can be determined as accurately as possible, and a more accurate lane line is therefore obtained.
A fourth aspect of the present application provides a computer device comprising: a processor; a memory for storing executable instructions of the processor; the processor is configured to perform the steps of any of the methods described above via execution of the executable instructions.
A fifth aspect of the present application provides a machine readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
Drawings
FIG. 1 is a schematic illustration of a traffic scene according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a lane line identification method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 4 is a schematic image fusion diagram of a lane line identification method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 8 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 9 is a schematic flowchart of a lane line identification method according to another embodiment of the present application;
fig. 10 is a schematic structural diagram of a frame of a lane line identification device according to an embodiment of the present application;
fig. 11 is a schematic view of a frame structure of a lane line identification device according to another embodiment of the present application;
fig. 12 is a schematic view of a frame structure of a lane line identification device according to still another embodiment of the present application.
Detailed Description
To facilitate an understanding of the present application, the present application will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present application are given in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, which schematically illustrates a traffic scene according to an embodiment of the present application, fig. 1 shows a road 100. The road 100 is a multi-lane road; although the road 100 in the figure has four lanes, in other examples the road may have two, three, five or more lanes. The road 100 may be a one-way road, i.e. the driving direction is the same for all lanes. The road 100 may also be a bidirectional road, i.e. vehicles in some lanes travel in a first direction while vehicles in the remaining lanes travel in a second direction opposite to the first direction.
An automatic driving assistance apparatus 200 is provided at the roadside. The automatic driving assistance apparatus 200 includes at least an image sensor 210 and captures images through the image sensor 210; each captured image includes a portion of the road 100. For example, in the embodiment shown in fig. 1, the automatic driving assistance apparatus 200 is disposed directly above the road 100 so that a better imaging angle of view can be obtained; it may be fixed to a stand 300 provided at the roadside so that it is suspended right above the road 100. Of course, in other embodiments, the automatic driving assistance apparatus 200 may be provided at the roadside.
A plurality of automated driving assistance apparatuses 200 may be provided along the extending direction of the road, each corresponding to a section of the road. The different automated driving assistance apparatuses 200 may communicate with each other to exchange data. For example, any two of the automatic driving assistance apparatuses 200 may communicate through a network; alternatively, adjacent apparatuses along the road may communicate with each other, or the apparatuses within a certain area may communicate with each other.
A communication connection may be established between the autonomous driving assistance apparatus 200 and the autonomous driving terminal to transmit data from the autonomous driving assistance apparatus to the autonomous driving terminal to assist lane line recognition of the autonomous driving terminal.
Referring to fig. 2, a flowchart of a lane line identification method according to an embodiment of the present application is exemplarily shown. The lane line identification method of this embodiment is performed by an automatic driving assistance apparatus, for example the automatic driving assistance apparatus in fig. 1. The automatic driving terminal receives the lane line information sent by the automatic driving assistance apparatus and parses the lane line, which is then used for automatic driving or assisted driving of the automatic driving terminal.
The method for identifying a lane line as shown in FIG. 2 may include steps S102 to S110, which will be described in detail below.
S102: a first image and a second image of a surrounding environment are acquired.
The automatic driving assistance apparatus includes an image sensor that captures image information within its shooting range once the apparatus starts operating. The image sensor may be a fixed-angle camera, so that the images it captures all share the same camera angle.
The first image is an image captured when the number of vehicles on the road is less than a preset value. The first image is used by the automatic driving assistance device to recognize the lane line as clearly as possible, so the fewer vehicles appearing in the first image, the better: vehicles on the road can block the lane lines, and fewer vehicles mean less blocking and clearer lane line identification. In some embodiments, the first image is an image taken when there is no vehicle on the road. Of course, there may be a small number of vehicles on the road; for example, in the first image the number of vehicles on the road is three or fewer.
The second image is an image captured while at least one vehicle is travelling on the road. The second image is used to correct, by means of the vehicle information it contains, the lane line identified from the first image, so as to avoid identifying a wrong lane line and to complete the lane line when the identified lane line is incomplete. Given this purpose, the second image should contain as many vehicles as possible; however, to remain applicable to remote road sections, the second image may also contain only one vehicle.
The image sensor captures original images according to a preset rule, without distinguishing the first image from the second image at the time of shooting; the first image and the second image are then screened out of the original images by the automatic driving assistance apparatus. For example, the image sensor may be set to capture one original image at preset intervals; in a specific embodiment, the image sensor captures one original image every 10 seconds.
After the image sensor shoots original images, the first image and the second image are obtained by screening the original images. For example, the automatic driving assistance apparatus may be provided with an image screening component to which the original images captured by the image sensor are transmitted for classification; the image screening component then screens the original images to obtain the first image and the second image. For instance, an original image with no vehicles on the road is screened out as a first image, and an original image with more than 5 vehicles on the road is screened out as a second image.
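As a minimal sketch of this screening rule, the classification into first and second images could look like the following; count_vehicles() is a hypothetical placeholder for any vehicle detector, and the thresholds simply mirror the examples given above rather than values fixed by this application:

```python
# Illustrative sketch of the screening in S102; count_vehicles() is a
# hypothetical detector, and the thresholds (0 and 5) are illustrative.
from typing import List, Tuple
import numpy as np

def count_vehicles(image: np.ndarray) -> int:
    """Placeholder: plug in any vehicle detector here."""
    raise NotImplementedError

def screen_images(originals: List[np.ndarray]) -> Tuple[List[np.ndarray], List[np.ndarray]]:
    first_images, second_images = [], []
    for img in originals:
        n = count_vehicles(img)
        if n == 0:            # "no vehicles on the road" -> first image
            first_images.append(img)
        elif n > 5:           # "more than 5 vehicles" -> second image
            second_images.append(img)
    return first_images, second_images
```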
Since road lane lines do not change at random, the first image need not be updated in real time and may instead be updated at a preset time interval. For example, the first image may be updated once a day, at three o'clock in the morning when there are few vehicles on the road. However, the present application is not limited to this; the first image may also be updated once an hour or in real time.
S104: and identifying lane line information from the first image to obtain a first lane line.
After the first image is obtained, image recognition is performed on it, and lane line information is recognized from the first image to obtain the first lane line. For example, after grayscale processing of the first image, the gray value of each pixel is obtained and the pixels whose gray values fall within a preset interval are identified as lane line pixels. As another example, the first lane line may be identified by a trained neural network: the first image is input to the neural network, which automatically identifies the first lane line.
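For illustration only, a grayscale-threshold version of this step might look like the sketch below; OpenCV is used, and the interval [200, 255] is an assumed value for bright markings, not one specified in this application:

```python
# Illustrative sketch of S104 using grayscale thresholding.
import cv2
import numpy as np

def recognize_first_lane_line(first_image: np.ndarray,
                              lo: int = 200, hi: int = 255) -> np.ndarray:
    """Return a binary mask of pixels whose gray value lies in [lo, hi]."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)          # gray processing
    mask = ((gray >= lo) & (gray <= hi)).astype(np.uint8) * 255   # preset interval
    return mask
```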
The automatic driving assistance device may be configured with an image recognition component, which may be configured with a grayscale recognition algorithm or a neural network, to recognize the first lane line from the first image.
S106: and fusing the plurality of second images to obtain a third image, wherein the third image comprises at least one congested lane, and the number of vehicles on the congested lane is greater than a preset value.
The accuracy of the first lane line identified from the first image may not be high. For example, when identifying the first lane line based on grayscale recognition, if a white or yellow vehicle happens to appear in the first image, the positions of that vehicle may also be identified as part of the first lane line, producing a redundant, erroneous first lane line. Alternatively, a lane line that is worn because wheels frequently roll over it is difficult to recognize, so the recognized first lane line may be incomplete. Therefore, the first lane line needs to be corrected using the second image.
Correcting the first lane line with the second image means correcting the lane lines on both sides of a lane using the vehicle information on that lane. If a single second image were applied directly to this correction, it would be difficult to obtain a second image meeting the requirement. Therefore, a plurality of second images can be fused to obtain a third image, and the third image is used to correct the first lane line. The third image obtained by fusion comprises at least one congested lane, and the congested lane contains enough vehicle information to correct the first lane line of at least one lane.
Referring to fig. 3, in one or more embodiments, S106: fusing the plurality of second images to obtain a third image, which specifically comprises:
s162: determining a reference image from the plurality of second images and determining a reference;
s164: identifying and extracting all vehicles in the second image except the reference image;
s166: and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a third image.
The automatic driving assistance apparatus may be provided with an image fusion component; after the second images are screened out by the image screening component, they are delivered to the image fusion component for fusion. In image fusion, one reference image is first determined from the plurality of second images. Since the shooting angle of the image sensor is fixed, the choice of reference image may be arbitrary: one image may be selected at random from the plurality of second images, or the second image with the largest number of vehicles on the road may be selected as the reference image. Then a reference datum is determined, and the plurality of second images are fused based on that reference datum. The reference datum is a fixed reference object that does not move easily and that appears in every second image. For example, since the edge of the road is stationary, it may be selected as the reference datum; a street light pole or the like may also be used.
After the reference datum is determined, image fusion can be carried out: the vehicles in each second image and their positions relative to the reference datum are identified in turn, and the vehicles are extracted from the second image. Each extracted vehicle is then merged into the reference image with reference to the reference datum, so that its position relative to the reference datum is unchanged after merging. Once enough second images have been fused, a congested lane containing enough vehicle information is obtained, yielding the third image.
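Under the assumption that the camera angle is fixed, so that a given pixel position corresponds to the same position relative to the reference datum in every second image, the fusion of S162 to S166 can be sketched as follows; detect_vehicle_boxes() is a hypothetical vehicle detector, not part of this application:

```python
# Illustrative sketch of S162-S166: paste each extracted vehicle into the
# reference image at the same position relative to the (shared) reference datum,
# which under a fixed camera angle is simply the same pixel region.
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def detect_vehicle_boxes(image: np.ndarray) -> List[Box]:
    """Placeholder: plug in any vehicle detector here."""
    raise NotImplementedError

def fuse_second_images(second_images: List[np.ndarray]) -> np.ndarray:
    reference = second_images[0].copy()        # an arbitrarily chosen reference image
    for img in second_images[1:]:
        for (x, y, w, h) in detect_vehicle_boxes(img):
            # the vehicle's position relative to the reference datum is preserved
            reference[y:y + h, x:x + w] = img[y:y + h, x:x + w]
    return reference                            # the fused "third image"
```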
Referring to fig. 4, in the second images 201 to 206 some lanes have no vehicles and some lanes have vehicles. For example, in the second image 201 there are vehicles in the first lane A and the third lane C and none in the second lane B and the fourth lane D; in the second image 202 there are vehicles in the first lane A and the second lane B and none in the third lane C and the fourth lane D; and in the second image 205 there is a vehicle only in the fourth lane D, with none in the first lane A, the second lane B or the third lane C. When the images are fused, the position of each vehicle in each second image relative to the reference datum is determined, and the vehicle is then extracted and fused into the reference image at the same position relative to the reference datum. In the third image 210, all the vehicles of the second images 201 to 206 are superimposed onto the reference image, so that the third image 210 contains every vehicle from the second images 201 to 206 with their positions relative to the reference datum unchanged. On this basis, as long as the second images participating in the fusion are sufficiently numerous, vehicles can be fused at every position of every lane. When there are enough vehicles in a lane, it is considered a congested lane; for example, the vehicles in a congested lane at least partially overlap.
In one or more embodiments, when the plurality of second images are fused, the second images captured within a preset time window may be fused in one batch. For example, the second images captured within 10 minutes are fused once to obtain a third image.
In one or more embodiments, the third image may undergo multiple fusions. For example, as an alternative embodiment, the second image captured within 10 minutes is fused once to obtain an intermediate image, and then the plurality of intermediate images are fused one or more times to obtain a third image. As another optional implementation, the third image may be further fused with the second image, so as to continuously iterate to obtain a new third image to cover the original third image, and thus, the third image may be updated in real time.
Since the same vehicle may appear in different second images, the same vehicle may be used for multiple fusions. For example, if the image sensor takes an original image every 10 seconds, the pictures taken by the same vehicle at different times appear in different second images, so that the second images may be merged in the same batch or in different batches.
S108: and obtaining a lane central line of at least one lane based on the third image.
In order to calibrate the first lane line using vehicle information, the vehicle information first needs to be processed so that enough of it is available. The third image obtained by fusing the plurality of second images comprises at least one congested lane; because the congested lane contains enough vehicle information, the lane center line of the congested lane can be obtained from the vehicles in it.
As previously described, when the fused second images are sufficiently numerous, the vehicles within the congested lane may be superimposed on each other, so that the congested lane in the third image may contain far more vehicles than could actually travel on the road at once. In a specific embodiment, obtaining the lane center line of at least one lane based on the third image may specifically be:
and determining the central point of each vehicle in the third image, and performing curve fitting on the central points of the vehicles in the same lane to obtain the lane central line.
It will be appreciated that the more vehicles accumulated within a lane, the closer their center points are to one another and the better the lane center line that is ultimately fitted. In the limit, after the third image has been fused repeatedly, the vehicle center points can be connected into a curve that serves directly as the lane center line; when the lane contains many vehicles that did not travel exactly along the same track, the connected center points form a band of a certain width, and the central axis of that band is taken as the lane center line.
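One possible realisation of this fitting step, sketched under the assumption that the vehicle center points have already been grouped by lane and that a quadratic polynomial is an adequate curve model (both assumptions are illustrative, not requirements of this application):

```python
# Illustrative sketch of S108: fit a center line per lane by curve fitting
# the vehicle center points taken from the third image.
from typing import Dict, List, Tuple
import numpy as np

def fit_lane_center_lines(centers_by_lane: Dict[int, List[Tuple[float, float]]],
                          degree: int = 2) -> Dict[int, np.poly1d]:
    center_lines = {}
    for lane_id, points in centers_by_lane.items():
        xs = np.array([p[0] for p in points], dtype=float)
        ys = np.array([p[1] for p in points], dtype=float)
        # express x as a polynomial of the image row y (lanes run roughly
        # top-to-bottom in the image); the fitted curve is the lane center line
        center_lines[lane_id] = np.poly1d(np.polyfit(ys, xs, degree))
    return center_lines
```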
S110: and calibrating the first lane line by using the lane central line to obtain the lane line.
Generally, a vehicle travels near the center of its lane, and the lane width is adapted to the vehicle width, so the center point of the vehicle does not deviate much from the lane center line. Based on this, the first lane line may be calibrated using the resulting lane center line. Specifically, after the lane center line is obtained, the identified first lane lines are associated with the lane center line of each lane, with the center line of each lane positioned between the first lane lines on its two sides. Using the ideal condition that the lane center line lies midway between the first lane lines on both sides of the lane, wrongly recognized first lane lines can be eliminated (for example, white or yellow vehicles wrongly recognized as first lane lines) and missing first lane lines can be supplemented.
Referring to fig. 5, in one or more embodiments, S110, calibrating the first lane line by using the lane center line to obtain the lane line, specifically includes:
s112: determining first lane lines on two sides corresponding to the center line of each lane;
s114: acquiring parameters of an image sensor, wherein the parameters at least comprise zooming;
s116: based on the parameters of the image sensor, the lane central line is superposed and fused with the first lane lines on the two sides respectively to obtain the presenting lane line.
When the first lane line is calibrated using the lane center line, the first lane lines on the two sides of the lane center line are first determined. This can be done with the aid of the reference datum: because the first image and the second image share the same shooting angle, the lane center line can be fused into the image containing the identified first lane line according to the same reference datum, and the first lane lines on the two sides of the lane center line are thereby determined. After this determination, the first lane lines on both sides can be calibrated according to the shape of the lane center line.
The calibration may be performed in combination with the parameters of the image sensor, which include at least a zoom parameter. In a picture taken by the image sensor, the farther an object is from the sensor, the smaller it appears; therefore, if the first lane lines on both sides of the lane center line were calibrated using shape alone, the error would grow with distance from the image sensor. Instead, the imaging of the sensor can be modelled by reverse analysis in combination with the zoom parameter, and the road can be reconstructed in three dimensions or in a plane (three-dimensionally, for example, by fusing the lane center line and the first lane line into a three-dimensional map). The lane center line is then superposed and fused with the first lane lines on its two sides; portions that do not fit this superposition are first lane lines that were recognized by mistake, while the superposed result can be used to fill in missing first lane lines. The presented lane line is thus obtained and transmitted to the automatic driving terminal to be shown to its owner.
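As a rough sketch of this calibration step: candidate first-lane-line points are kept only if they lie roughly half a lane width to either side of the fitted center line. The half-width and tolerance constants are illustrative assumptions standing in for the zoom-derived model described above, which would vary these values with distance from the sensor:

```python
# Illustrative sketch of S112-S116: filter first-lane-line points against the
# fitted center line; points far from the expected offset are treated as
# mis-recognised (e.g. a white vehicle).
from typing import List, Tuple
import numpy as np

def calibrate_lane_line(center_line: np.poly1d,
                        candidate_points: List[Tuple[float, float]],
                        half_width_px: float = 60.0,
                        tolerance_px: float = 15.0) -> List[Tuple[float, float]]:
    kept = []
    for (x, y) in candidate_points:
        offset = abs(x - center_line(y))           # distance from the center line at row y
        if abs(offset - half_width_px) <= tolerance_px:
            kept.append((x, y))                    # plausible lane-line point
    return kept
```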
According to the above lane line identification method, the first lane line is identified from the first image while the plurality of second images are fused into a third image that contains enough information about vehicles travelling on the lane; the lane center line can therefore be obtained from the vehicle travel information and used to calibrate the first lane line, yielding the finally presented lane line. Because enough second images can be collected over time, enough samples of vehicle travel are obtained, the lane center line can be determined as accurately as possible, and a more accurate lane line is therefore obtained.
Referring to fig. 6, the present application further provides a lane line identification method applied to an automatic driving terminal, for example, the automatic driving terminal shown in fig. 1. The lane line recognition method shown in FIG. 6 may include steps S202 to S214, which will be described in detail below.
S202: acquiring a first historical image and a second historical image of the surrounding environment;
the first historical image is an image which is shot by an image sensor fixedly arranged in the surrounding environment when the number of vehicles on the road is less than a preset value. The second history image is an image captured by an image sensor fixedly installed in the surrounding environment when at least one vehicle is traveling on the road. The first history image is the first image in step 102, the second history image is the second image in step 102, the automatic driving assistance device may store the captured image locally after capturing the image, and when the automatic driving terminal travels through the automatic driving assistance device, the automatic driving assistance device transmits the captured image to the automatic driving terminal, and the automatic driving assistance device receives the first history image and the second history image.
The screening of the first historical image and the second historical image from the images captured by the automatic driving assistance apparatus may be performed either at the assistance apparatus or at the automatic driving terminal. If the screening is completed at the assistance apparatus, i.e. the assistance apparatus screens out the first image and the second image, the screened images are sent to a passing automatic driving terminal, which thereby obtains the first historical image and the second historical image. If the screening is completed at the automatic driving terminal, the assistance apparatus sends the captured original pictures to the terminal, and the terminal screens out the first historical image and the second historical image that meet the conditions. The detailed screening process is discussed in step S102 and is not repeated here.
S204: identifying lane line information from the first historical image to obtain a first lane line;
s206: fusing the plurality of second historical images to obtain a composite image; the composite image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
s208: obtaining lane center lines of at least one lane based on the composite image;
s210: and calibrating the first lane line by using the lane central line to obtain a second lane line.
Steps S204 to S210 are basically the same as steps S104 to S110, except that they are executed at the automatic driving terminal by the lane line recognition device provided there, and the composite image in step S206 corresponds to the third image in step S106.
Referring to fig. 7, in one or more embodiments, S206: fusing the plurality of second history images to obtain a composite image, which specifically comprises:
s262: determining a reference image from the second history image and determining a reference;
s264: identifying and extracting all vehicles in the second history image except the reference image;
s266: and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a composite image.
The execution processes of the steps S262 to S266 are basically the same as the execution processes of the steps S162 to S166, and are completed by a lane line recognition device of the automatic driving terminal.
Referring to fig. 8, in one or more embodiments, S210, calibrating the first lane line by using the lane center line to obtain the second lane line, specifically includes:
s212: determining first lane lines on two sides corresponding to the center line of each lane;
s214: acquiring a first parameter of an image sensor for shooting a second history image, wherein the first parameter at least comprises zooming;
s216: and on the basis of the first parameters, overlapping and fusing the lane central lines and the lane lines on the two sides respectively to obtain second lane lines.
The execution of steps S212 to S216 is substantially the same as that of steps S112 to S116, except that in step S214 the first parameter of the image sensor that captured the second historical image cannot be obtained directly by the automatic driving terminal and must be estimated in reverse from the second historical image. After the first parameter is estimated, the second historical image is reconstructed in three dimensions or in a plane (for example, fused with a three-dimensional map or a two-dimensional map), and the lane center line is fused into this reconstructed second historical image. Of course, the first parameter may instead be acquired together with the first historical image and the second historical image: the automatic driving assistance device may transmit the original images, or the first and second images, together with the first parameter of its image sensor, to the automatic driving terminal.
The obtained second lane line may be directly presented to the user, for example, on a display screen in the cabin after the second lane line is merged with the vehicle navigation system or the vehicle map.
Referring to FIG. 9, in one or more embodiments, the method further includes steps 220-230:
s220: acquiring a real-time image of the surrounding environment, and identifying a third lane line from the real-time image; the real-time image is shot by an image sensor of the automatic driving terminal;
the real-time image of the surrounding environment is captured by an image sensor of the automatic driving terminal, and the image of the periphery of the automatic driving terminal is captured in real time when the automatic driving terminal runs on the road. A third lane line is identified from the one or more real-time images in a manner similar to the first lane line identified from the first historical image in step 204.
S230: and combining the second lane line and the third lane line to obtain an output lane line.
Once the second lane line is obtained, it may be combined with the third lane line to obtain an output lane line presented to the user. Specifically, the lane in which the automatic driving terminal is located is determined from the real-time image, and the second lane line is further corrected with the third lane line recognized by the terminal so that it is consistent with the real-time road condition.
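A simple way to realise this combination, sketched under the assumption that both lane lines are represented as x-coordinates sampled at the same image rows and merged with fixed weights (the 0.7/0.3 split is an illustrative assumption, not a value from this application):

```python
# Illustrative sketch of S230: blend the historical (second) lane line with
# the real-time (third) lane line.
import numpy as np

def combine_lane_lines(second_line_x: np.ndarray,
                       third_line_x: np.ndarray,
                       weight_history: float = 0.7) -> np.ndarray:
    """Both inputs give lane-line x-coordinates sampled at the same rows."""
    return weight_history * second_line_x + (1.0 - weight_history) * third_line_x
```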
In particular, the automatic driving terminal may display only the lane lines on the two sides of the lane in which it is located. In that case the composite images can be screened in combination with the real-time image: as long as the lane in which the terminal is located contains enough vehicles in a composite image, that composite image can be used, and not every lane needs to be a congested lane. This reduces the amount of computation and at the same time allows more composite images that meet the requirement to be screened out.
According to the above lane line identification method, the first lane line is identified from the first historical image while the plurality of second historical images are fused into a composite image that contains enough information about vehicles travelling on the lanes; the lane center line can therefore be obtained from the vehicle travel information and used to calibrate the first lane lines on the two sides of each lane, yielding the finally presented lane line. Because enough second historical images can be acquired, enough samples of vehicle travel are obtained, the lane center line can be determined as accurately as possible, and a more accurate lane line is therefore obtained.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 10, the present application further provides a lane line recognition apparatus 10, which includes an acquisition component 110, an image recognition component 120, an image fusion component 130, a centerline extraction component 140, and a calibration component 150, wherein,
an acquisition component 110 for acquiring a first image and a second image;
an image recognition component 120, configured to recognize lane line information from the first image, and obtain a first lane line;
an image fusion unit 130, configured to fuse the plurality of second images to obtain a third image; the third image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
a center line extracting part 140 for obtaining a lane center line of at least one lane based on the third image;
and the calibration component 150 is used for calibrating the first lane line by using the lane center line to obtain the lane line.
In one or more embodiments, the acquisition component 110 can acquire the first image and the second image through its own camera; for example, in step S102 the first image and the second image are acquired by an image sensor on the automatic driving assistance apparatus. The acquisition component 110 may also receive the first image and the second image transmitted by other electronic devices; for example, in step S202 the vehicle-mounted lane line recognition apparatus receives the first historical image and the second historical image from the automatic driving assistance device located at the roadside.
Referring to FIG. 11, in one or more embodiments, the image fusion component 130 can include a reference determining component 131, an identification extraction component 133, and a fusion component 135, wherein
A reference determining component 131 for determining a reference image from the plurality of second images and determining a reference;
an identification extraction component 133 for identifying and extracting all vehicles in the second image except the reference image;
and a fusion component 135, configured to fuse the extracted vehicle into the reference image based on the reference datum, so as to obtain a third image.
Referring to fig. 12, in one or more embodiments, the calibration component 150 can include a first determination component 151, a parameter acquisition component 153, and a superposition fusion component 155, wherein,
the first determining component 151 is used for determining first lane lines on two sides corresponding to the center line of each lane;
a parameter acquiring component 153, configured to acquire parameters of the image sensor, where the parameters at least include zooming;
and the superposition fusion component 155 is used for superposing and fusing the lane central line and the first lane lines on the two sides respectively based on the parameters of the image sensor to obtain a presented lane line.
According to the above lane line recognition device, the first lane line is identified from the first image while the plurality of second images are fused into a third image that contains enough information about vehicles travelling on the lane; the lane center line can therefore be obtained from the vehicle travel information and used to calibrate the first lane line, yielding the finally presented lane line. Because enough second images can be collected over time, enough samples of vehicle travel are obtained, the lane center line can be determined as accurately as possible, and a more accurate lane line is therefore obtained.
An embodiment of the present application further provides a machine-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any of the above embodiments.
The system/computer device integrated components/modules/units, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments described above can be realized. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The present application further provides a computer device, comprising: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above embodiments via execution of the executable instructions.
In the several embodiments provided in this application, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative, and for example, the division of the components is only one logical division, and other divisions may be realized in practice.
In addition, each functional module/component in the embodiments of the present application may be integrated into the same processing module/component, or each functional module/component may exist alone physically, or two or more functional modules/components may be integrated into the same processing module/component. The integrated modules/components can be implemented in the form of hardware, or can be implemented in the form of hardware plus software functional modules/components.
It will be evident to those skilled in the art that the embodiments of the present application are not limited to the details of the foregoing illustrative embodiments, and that the embodiments of the present application can be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the embodiments being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Several units, modules or means recited in the system, apparatus or terminal claims may also be implemented by one and the same unit, module or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A lane line recognition method applied to an automatic driving assistance apparatus, characterized by comprising:
acquiring a first image and a second image of the surrounding environment;
recognizing lane line information from the first image to obtain a first lane line;
fusing the plurality of second images to obtain a third image; the third image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
obtaining a lane center line of at least one lane based on the third image;
and calibrating the first lane line by using the lane central line to obtain the lane line.
2. The method according to claim 1, wherein the step of fusing the plurality of second images to obtain the third image specifically comprises:
determining a reference image from the plurality of second images and determining a reference;
identifying and extracting all vehicles in the second image except the reference image;
and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a third image.
3. The method according to claim 1, wherein the step of calibrating the first lane line with the lane center line to obtain the lane line comprises:
determining first lane lines on two sides corresponding to the center line of each lane;
acquiring parameters of an image sensor, wherein the parameters at least comprise zooming;
based on the parameters of the image sensor, the lane central line is superposed and fused with the first lane lines on the two sides respectively to obtain the presenting lane line.
4. A method for identifying lane lines is applied to an automatic driving terminal and is characterized by comprising the following steps:
acquiring a first historical image and a second historical image of the surrounding environment;
identifying lane line information from the first historical image to obtain a first lane line;
fusing the plurality of second historical images to obtain a composite image; the composite image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
obtaining lane center lines of at least one lane based on the composite image;
and calibrating the first lane line by using the lane central line to obtain a second lane line.
5. The method according to claim 4, wherein the step of fusing the plurality of second history images to obtain a composite image specifically includes:
determining a reference image from the second history image and determining a reference;
identifying and extracting all vehicles in the second history image except the reference image;
and on the basis of the reference standard, the extracted vehicle is merged into the reference image to obtain a composite image.
6. The method according to claim 4, wherein the step of calibrating the first lane line with the lane center line to obtain the second lane line comprises:
determining first lane lines on two sides corresponding to the center line of each lane;
acquiring a first parameter of an image sensor for shooting a second history image, wherein the first parameter at least comprises zooming;
and on the basis of the first parameters, overlapping and fusing the lane central lines and the lane lines on the two sides respectively to obtain second lane lines.
7. The method of claim 4, further comprising:
acquiring a real-time image of the surrounding environment, and identifying a third lane line from the real-time image; the real-time image is shot by an image sensor of the automatic driving terminal;
and combining the second lane line and the third lane line to obtain an output lane line.
8. A lane line recognition apparatus comprising an acquisition section, an image recognition section, an image fusion section, a center line extraction section, and a calibration section,
acquisition means for acquiring a first image and a second image;
the image identification component is used for identifying lane line information from the first image to obtain a first lane line;
the image fusion component is used for fusing the plurality of second images to obtain a third image; the third image comprises at least one congested lane, and the number of vehicles on the congested lane is larger than a preset value;
center line extraction means for obtaining a lane center line of at least one lane based on the third image;
and the calibration component is used for calibrating the first lane line by using the lane center line to obtain the lane line.
9. A computer device, comprising: a processor; a memory for storing executable instructions of the processor; characterized in that the processor is configured to perform the steps of the method of any of claims 1-7 via execution of the executable instructions.
10. A machine readable storage medium, having stored thereon a computer program, the computer program, when being executed by a processor, performing the steps of the method of any one of claims 1 to 7.
CN202110656597.5A 2021-06-11 2021-06-11 Lane line identification method, lane line identification device, computer equipment and storage medium Active CN113378719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110656597.5A CN113378719B (en) 2021-06-11 2021-06-11 Lane line identification method, lane line identification device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110656597.5A CN113378719B (en) 2021-06-11 2021-06-11 Lane line identification method, lane line identification device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113378719A (en) 2021-09-10
CN113378719B CN113378719B (en) 2024-04-05

Family

ID=77574188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110656597.5A Active CN113378719B (en) 2021-06-11 2021-06-11 Lane line identification method, lane line identification device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113378719B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114166238A (en) * 2021-12-06 2022-03-11 北京百度网讯科技有限公司 Lane line identification method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200026282A1 (en) * 2018-07-23 2020-01-23 Baidu Usa Llc Lane/object detection and tracking perception system for autonomous vehicles
CN111213154A (en) * 2019-03-08 2020-05-29 深圳市大疆创新科技有限公司 Lane line detection method, lane line detection equipment, mobile platform and storage medium
CN111783666A (en) * 2020-07-01 2020-10-16 北京计算机技术及应用研究所 Rapid lane line detection method based on continuous video frame corner feature matching
CN112154449A (en) * 2019-09-26 2020-12-29 深圳市大疆创新科技有限公司 Lane line fusion method, lane line fusion device, vehicle, and storage medium
CN112189225A (en) * 2018-06-26 2021-01-05 Sk电信有限公司 Lane line information detection apparatus, method, and computer-readable recording medium storing computer program programmed to execute the method
CN112507852A (en) * 2020-12-02 2021-03-16 上海眼控科技股份有限公司 Lane line identification method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112189225A (en) * 2018-06-26 2021-01-05 Sk电信有限公司 Lane line information detection apparatus, method, and computer-readable recording medium storing computer program programmed to execute the method
US20200026282A1 (en) * 2018-07-23 2020-01-23 Baidu Usa Llc Lane/object detection and tracking perception system for autonomous vehicles
CN111213154A (en) * 2019-03-08 2020-05-29 深圳市大疆创新科技有限公司 Lane line detection method, lane line detection equipment, mobile platform and storage medium
CN112154449A (en) * 2019-09-26 2020-12-29 深圳市大疆创新科技有限公司 Lane line fusion method, lane line fusion device, vehicle, and storage medium
CN111783666A (en) * 2020-07-01 2020-10-16 北京计算机技术及应用研究所 Rapid lane line detection method based on continuous video frame corner feature matching
CN112507852A (en) * 2020-12-02 2021-03-16 上海眼控科技股份有限公司 Lane line identification method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114166238A (en) * 2021-12-06 2022-03-11 北京百度网讯科技有限公司 Lane line identification method and device and electronic equipment
CN114166238B (en) * 2021-12-06 2024-02-13 北京百度网讯科技有限公司 Lane line identification method and device and electronic equipment

Also Published As

Publication number Publication date
CN113378719B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
US10817731B2 (en) Image-based pedestrian detection
US10860896B2 (en) FPGA device for image classification
CN108345822B (en) Point cloud data processing method and device
US20180288320A1 (en) Camera Fields of View for Object Detection
US8452053B2 (en) Pixel-based texture-rich clear path detection
US8634593B2 (en) Pixel-based texture-less clear path detection
US20180189574A1 (en) Image Capture Device with Customizable Regions of Interest
DE112020003897T5 (en) SYSTEMS AND METHODS FOR MONITORING LANE CONGESTION
CN111508260A (en) Vehicle parking space detection method, device and system
EP3007099A1 (en) Image recognition system for a vehicle and corresponding method
DE102009050505A1 (en) Clear path detecting method for vehicle i.e. motor vehicle such as car, involves modifying clear path based upon analysis of road geometry data, and utilizing clear path in navigation of vehicle
DE112020002764T5 (en) SYSTEMS AND METHODS FOR VEHICLE NAVIGATION
JP2006208223A (en) Vehicle position recognition device and vehicle position recognition method
CN107273788A (en) The imaging system and vehicle imaging systems of lane detection are performed in vehicle
DE112020002592T5 (en) SYSTEMS AND METHODS FOR VEHICLE NAVIGATION BASED ON IMAGE ANALYSIS
CN114556249A (en) System and method for predicting vehicle trajectory
CN112740225A (en) Method and device for determining road surface elements
CN112835030A (en) Data fusion method and device for obstacle target and intelligent automobile
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN111319560B (en) Information processing system, program, and information processing method
EP3859390A1 (en) Method and system for rendering a representation of an evinronment of a vehicle
CN113378719B (en) Lane line identification method, lane line identification device, computer equipment and storage medium
CN108195359B (en) Method and system for acquiring spatial data
WO2008046458A1 (en) Method and device for capturing the surroundings in motor vehicles with the aid of aerial images
JP2017034638A (en) Image processing system and image processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240307

Address after: Unit 901A, 9th Floor, Building AB, Dongsheng Building, No. 8 Zhongguancun East Road, Haidian District, Beijing, 100000

Applicant after: Beijing Qingwei Rufeng Technology Co.,Ltd.

Country or region after: China

Address before: 1020, international culture building, 3039 Shennan Middle Road, Futian District, Shenzhen, Guangdong 518000

Applicant before: Xu Jie

Country or region before: China

GR01 Patent grant
GR01 Patent grant