CN110276293A - Method for detecting lane lines, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110276293A
CN110276293A · CN201910536493.3A
Authority
CN
China
Prior art keywords
lane
center point
image
grid
lane center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910536493.3A
Other languages
Chinese (zh)
Other versions
CN110276293B (en)
Inventor
潘杰 (Pan Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910536493.3A
Publication of CN110276293A
Application granted
Publication of CN110276293B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application proposes a lane line detection method, device, electronic equipment and storage medium, belonging to the field of computer application technology. The method comprises: acquiring an image to be detected; inputting the image into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value; performing non-maximum suppression on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image; and determining the lane lines in the image according to the positions and corresponding lane widths of the lane center points. This lane line detection method reduces the interference of noise on lane line detection and improves the robustness and accuracy of lane line detection.

Description

Method for detecting lane lines, device, electronic equipment and storage medium
Technical field
The present application relates to the field of computer application technology, and more particularly to a lane line detection method, device, electronic equipment and storage medium.
Background technique
There are currently two main lane line detection methods. The first extracts features such as lane line color and brightness, performs edge detection, and then extracts straight lines using the Hough transform. This method is easily disturbed by other colors and brightness levels, and it is sensitive to noise, for example when lane lines are blurred or the scene is dark; its robustness and detection accuracy are poor.
The second method segments lane lines with a neural network and takes the center line of each segmented lane line region as the detection result. This method struggles to segment effectively under overexposure, in rainy weather, or when lane line boundaries are worn, and it easily misdetects interference such as arrows and braking marks as lane lines, so its accuracy is poor.
Summary of the invention
The lane line detection method, device, electronic equipment and storage medium proposed by the present application are intended to solve the problems of lane line detection methods in the related art, which are easily disturbed by noise, have poor robustness, are prone to false detections, and have poor accuracy.
A lane line detection method proposed by an embodiment of one aspect of the present application comprises: acquiring an image to be detected; inputting the image into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value; performing non-maximum suppression on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image; and determining the lane lines in the image according to the positions and corresponding lane widths of the lane center points in the image.
A lane line detection device proposed by an embodiment of another aspect of the present application comprises: an acquisition module for acquiring an image to be detected; an input module for inputting the image into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value; a processing module for performing non-maximum suppression on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image; and a determining module for determining the lane lines in the image according to the positions and corresponding lane widths of the lane center points in the image.
Electronic equipment proposed by an embodiment of a further aspect of the present application comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the lane line detection method described above.
A computer-readable storage medium proposed by an embodiment of a further aspect of the present application stores a computer program which, when executed by a processor, implements the lane line detection method described above.
A computer program proposed by an embodiment of yet another aspect of the present application, when executed by a processor, implements the lane line detection method described in the embodiments of the present application.
With the lane line detection method, device, electronic equipment, computer-readable storage medium and computer program provided by the embodiments of the present application, an acquired image to be detected can be input into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value. Non-maximum suppression is performed on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image, and the lane lines in the image are then determined from these positions and lane widths. By dividing the image to be detected into multiple grids and using a trained target detection model to detect the lane center point and lane width contained in each grid, the lane lines in the image can be determined from the detected lane center points and lane widths, which reduces the interference of noise on lane line detection and improves its robustness and accuracy.
Additional aspects and advantages of the present application will be set forth in part in the following description, will in part become apparent from that description, or may be learned by practice of the application.
Detailed description of the invention
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of a lane line detection method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another lane line detection method provided by an embodiment of the present application;
Fig. 3 is a structural schematic diagram of a lane line detection device provided by an embodiment of the present application;
Fig. 4 is a structural schematic diagram of electronic equipment provided by an embodiment of the present application.
Specific embodiment
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the application and should not be understood as limiting it.
In view of the problems of lane line detection methods in the related art, which are easily disturbed by noise, have poor robustness, are prone to false detections, and have poor accuracy, the embodiments of the present application propose a lane line detection method.
With the lane line detection method provided by the embodiments of the present application, an acquired image to be detected can be input into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value. Non-maximum suppression is performed on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image, and the lane lines in the image are then determined from these positions and lane widths. By dividing the image to be detected into multiple grids and using a trained target detection model to detect the lane center point and lane width contained in each grid, the lane lines in the image can be determined from the detected lane center points and lane widths, which reduces the interference of noise on lane line detection and improves its robustness and accuracy.
The lane line detection method, device, electronic equipment, storage medium and computer program provided by the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a lane line detection method provided by an embodiment of the present application.
As shown in Fig. 1, the lane line detection method comprises the following steps:
Step 101: acquire an image to be detected.
It should be noted that the lane line detection method of the embodiments of the present application can be executed by the lane line detection device provided herein. In practice, the method can be applied in the field of automatic driving to provide road information for autonomous vehicles, so the lane line detection device of the embodiments of the present application can be installed in any vehicle to execute the lane line detection method of the present application.
In the embodiments of the present application, the manner of acquiring the image to be detected can be determined according to the specific application scenario. For example, when the lane line detection device is applied in an autonomous vehicle, road information ahead of the vehicle captured by the vehicle's camera can be acquired as the image to be detected. Specifically, the lane line detection device can establish a direct communication connection with the camera to directly acquire the real-time images it captures; alternatively, the camera can store the captured images in the vehicle's storage device, from which the lane line detection device can then obtain the image to be detected.
Step 102: input the image into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value.
The preset target detection model may be one trained in advance, for example a one-stage target detection model such as the You Only Look Once: Unified, Real-Time Object Detection V2 (YOLOv2) algorithm model or the Single Shot MultiBox Detector (SSD) algorithm model, but it is not limited to these.
A prediction box is a tool, defined in the preset target detection model with a certain size and position, for performing target detection on the image; it is not directly tied to the image to be detected or to the grids in the image. In practice, the number of prediction boxes and their initial sizes and positions can be preset according to actual needs such as the required prediction accuracy and computational complexity, which the embodiments of the present application do not limit; for example, the number of prediction boxes can be 5.
The lane center point lateral offset refers to the lateral offset between a lane center point and the top-left corner coordinate of the grid containing it. The lane center point score refers to the confidence of the lane center point corresponding to a prediction box, reflecting the reliability of that lane center point. The prediction box width adjustment value is used to adjust the width of the prediction box to obtain its current width.
Preferably, since the target detection model in the embodiments of the present application is used to detect lane center points and lane widths, a prediction box can be defined as a line segment with a certain position and width, so that the detection information only needs to include the width adjustment value of the prediction box for adjusting its width.
In the embodiments of the present application, the image to be detected can first be divided into multiple grids and input into the preset target detection model. The convolutional part of the model produces a feature map of the image, in which each point corresponds to one grid in the image. The regression part of the model then uses the feature map and the image to be detected to obtain the detection information for each grid.
It should be noted that each grid in the image is used to predict targets whose centers lie within that grid. In practice, the grid size can be preset according to actual needs, which the embodiments of the present application do not limit. For example, with an image to be detected of 1920 × 640 pixels divided into grids of 16 × 16 pixels, the resulting feature map is 120 × 40 points.
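The grid division in the example above can be sketched as follows; this is a minimal illustration using the dimensions from the text, and the variable names are illustrative, not from the patent:

```python
# Sketch of the grid division described above: a 1920 x 640 image split
# into 16 x 16-pixel grids yields a 120 x 40 feature map, with one
# feature-map point per grid.
IMG_W, IMG_H = 1920, 640
GRID_SIZE = 16

grid_cols = IMG_W // GRID_SIZE  # 120 feature-map columns
grid_rows = IMG_H // GRID_SIZE  # 40 feature-map rows

print(grid_cols, grid_rows)  # 120 40
```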
Step 103: perform non-maximum suppression on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image.
In the embodiments of the present application, multiple preset prediction boxes are used to detect the target in each grid (i.e., the lane center point), to guarantee the accuracy of lane line detection. Because the multiple prediction boxes differ in size, the accuracy of the detection information corresponding to each prediction box differs, so the most accurate prediction box for each grid can be determined from the grid's detection information. The position of the lane center point contained in each grid and its corresponding lane width, i.e., the position and corresponding lane width of each lane center point in the image, can then be determined from the lane center point lateral offset, prediction box width adjustment value, and so on of the most accurate prediction box of each grid.
Specifically, non-maximum suppression can be performed on the detection information of each grid to determine the most accurate prediction box of each grid, and thereby the position and corresponding lane width of each lane center point in the image. That is, in one possible implementation of the embodiments of the present application, step 103 may include:
For each grid in the image, determining the prediction box with the maximum lane center point score in the grid as the optimal prediction box of that grid;
for each row of grids, at every preset step, selecting an optimal prediction box whose score exceeds a threshold as a target prediction box;
for each target prediction box, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and prediction box width adjustment value of that target prediction box.
In the embodiments of the present application, the lane center point score of a prediction box reflects the accuracy with which the box predicts the lane center point, so the optimal prediction box of each grid can be determined from the lane center point scores of the grid's prediction boxes. Specifically, since a larger lane center point score implies a more accurate lane center point lateral offset, the prediction box with the maximum lane center point score in each grid can be determined as that grid's optimal prediction box.
After the optimal prediction box of each grid in the image is determined, the target prediction boxes of each row of grids can be selected from the row's optimal prediction boxes according to a preset step. Specifically, at every preset step, an optimal prediction box whose lane center point score exceeds the threshold can be determined as a target prediction box.
For example, suppose the image to be detected is 1920 × 640 pixels and each grid is 16 × 16 pixels, i.e., the image contains 120 × 40 grids, and the preset step is 160 pixels. Then in each row of grids, every 160 pixels (i.e., every 10 grids) it is judged whether the optimal prediction boxes of those 10 grids include one whose lane center point score exceeds the threshold; if so, that optimal prediction box is determined as a target prediction box.
It should be noted that the above example is merely illustrative and should not be considered a limitation of the present application. In practice, the preset step and threshold can be set according to actual needs, which the embodiments of the present application do not limit.
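The per-grid and per-row selection above can be sketched as follows. This is a minimal illustration assuming each grid's detections are given as (score, lateral_offset, width_adjustment) tuples; the function name, and the choice of keeping at most the single highest-scoring box per window, are assumptions for illustration rather than details taken verbatim from the patent:

```python
def select_target_boxes(row_grids, stride_cells=10, score_thresh=0.5):
    """For one row of grids: keep each grid's optimal prediction box
    (maximum lane center point score), then, every `stride_cells`
    grids, keep the window's best box if its score exceeds the
    threshold. Each box is a (score, lateral_offset, width_adj) tuple."""
    optimal = [max(boxes, key=lambda b: b[0]) for boxes in row_grids]
    targets = []
    for start in range(0, len(optimal), stride_cells):
        window = optimal[start:start + stride_cells]
        best = max(window, key=lambda b: b[0])
        if best[0] > score_thresh:
            targets.append(best)
    return targets


# Tiny example: four grids, windows of two grids each (stride of 2 for brevity).
row = [
    [(0.2, 1.0, 0.0), (0.9, 4.0, -2.0)],  # grid 0: best box scores 0.9
    [(0.1, 2.0, 0.0)],                    # grid 1
    [(0.3, 5.0, 1.0)],                    # grid 2
    [(0.4, 6.0, 0.5)],                    # grid 3: window max 0.4, below threshold
]
print(select_target_boxes(row, stride_cells=2))  # [(0.9, 4.0, -2.0)]
```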
In the embodiments of the present application, after the target prediction boxes are determined, the position and corresponding lane width of the lane center point corresponding to each target prediction box can be determined from that box's lane center point lateral offset and prediction box width adjustment value, thereby determining the positions and corresponding lane widths of all lane center points in the image; that is, each target prediction box corresponds to one lane center point in the image.
Specifically, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and prediction box width adjustment value of a target prediction box comprises the following steps:
For each target prediction box, determining the position of a lane center point in the image according to the lane center point lateral offset of the target prediction box and the coordinates of the grid to which the target prediction box belongs;
determining the lane width corresponding to that lane center point according to the prediction box width adjustment value of the target prediction box and the width of the target prediction box.
In the embodiments of the present application, since the lane center point lateral offset of a prediction box refers to the difference between the abscissa of the lane center point and the abscissa of the top-left corner of the grid to which the prediction box belongs, the coordinates of the top-left corner of that grid can first be determined from the position of the target prediction box, and the coordinates of the corresponding lane center point, i.e., the position of a lane center point in the image, can then be determined from the box's lane center point lateral offset.
It should be noted that when training the preset target detection model, the lane width in the training data can be used as the width of the prediction box, so that when detecting lane lines the current width of the prediction box can be taken as the lane width. Therefore, the lane width corresponding to the lane center point of a target prediction box can be determined from the box's width and its prediction box width adjustment value. Specifically, the sum of the width of the target prediction box and its width adjustment value can be determined as the lane width corresponding to the box's lane center point.
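Under the conventions above (lateral offset measured from the grid's top-left corner; lane width equal to the box width plus its adjustment value), decoding a target prediction box might look like the following sketch. The function name, and the choice of the grid's top edge as the vertical coordinate, are illustrative assumptions:

```python
def decode_target_box(grid_col, grid_row, grid_size,
                      lateral_offset, box_width, width_adj):
    """Decode a target prediction box into a lane center point position
    and its lane width, following the text above: the x-coordinate is
    the grid's top-left x plus the lateral offset, and the lane width
    is the prediction box width plus its adjustment value."""
    cx = grid_col * grid_size + lateral_offset
    cy = grid_row * grid_size  # assumed: y taken from the grid row
    lane_width = box_width + width_adj
    return (cx, cy), lane_width


# A 16-pixel grid at column 3, row 5; offset 4.0; box width 20 adjusted by -2:
print(decode_target_box(3, 5, 16, 4.0, 20.0, -2.0))  # ((52.0, 80), 18.0)
```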
Step 104: determine the lane lines in the image according to the positions and corresponding lane widths of the lane center points in the image.
In the embodiments of the present application, after the position of each lane center point contained in the image and the corresponding lane width are determined, the points contained on the lane lines can be determined from these positions and lane widths, thereby determining the lane lines in the image.
Specifically, a lane generally includes a left lane line and a right lane line, so the lane lines on the two sides can be determined separately from the position and corresponding lane width of each lane center point. That is, in one possible implementation of the embodiments of the present application, step 104 may include:
For each lane center point in the image, subtracting half of the corresponding lane width from the lateral coordinate of the lane center point's position to obtain the position of the left lane line boundary point corresponding to that lane center point;
adding half of the corresponding lane width to the lateral coordinate of the lane center point's position to obtain the position of the right lane line boundary point corresponding to that lane center point;
determining the lane lines in the image according to the positions of the left and right lane line boundary points corresponding to each lane center point.
It can be understood that the distance from each lane center point in the image to its corresponding lane line is half of the lane width; that is, a point whose vertical coordinate equals that of the lane center point and whose lateral coordinate differs from that of the lane center point by half of the lane width lies on the lane line corresponding to that lane center point.
In the embodiments of the present application, subtracting half of the corresponding lane width from the lateral coordinate of the lane center point's position gives the lateral coordinate of the corresponding left lane line boundary point, and the vertical coordinate of the lane center point serves as the vertical coordinate of that boundary point, thereby determining the position of the left lane line boundary point corresponding to the lane center point. Correspondingly, adding half of the corresponding lane width to the lateral coordinate of the lane center point's position gives the lateral coordinate of the corresponding right lane line boundary point, whose vertical coordinate is likewise that of the lane center point, thereby determining the position of the right lane line boundary point corresponding to the lane center point.
It can be understood that the line connecting the left lane line boundary points of the lane center points is the left lane line of the lane, and the line connecting the right lane line boundary points is the right lane line. Therefore, after the positions of the left and right lane line boundary points corresponding to each lane center point are determined, the straight line on which the left boundary points lie, i.e., the left lane line, can be determined from their positions, and the straight line on which the right boundary points lie, i.e., the right lane line, can be determined from their positions, thereby determining the lane lines in the image.
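The boundary-point computation above reduces to offsetting each center point by half the lane width to each side; a minimal sketch (names are illustrative):

```python
def lane_boundary_points(center, lane_width):
    """Given a lane center point (x, y) and its lane width, return the
    left and right lane line boundary points: same vertical coordinate,
    lateral coordinate shifted by half the lane width to each side."""
    cx, cy = center
    half = lane_width / 2.0
    return (cx - half, cy), (cx + half, cy)


left, right = lane_boundary_points((52.0, 80.0), 18.0)
print(left, right)  # (43.0, 80.0) (61.0, 80.0)
```

Fitting a line through the left boundary points of all center points then gives the left lane line, and likewise for the right.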
With the lane line detection method provided by the embodiments of the present application, an acquired image to be detected can be input into a preset target detection model to obtain detection information for each grid in the image, the detection information including, for each prediction box, a lane center point lateral offset, a lane center point score, and a prediction box width adjustment value. Non-maximum suppression is performed on the detection information of each grid to obtain the position and corresponding lane width of each lane center point in the image, and the lane lines in the image are then determined from these positions and lane widths. By dividing the image to be detected into multiple grids and using a trained target detection model to detect the lane center point and lane width contained in each grid, the lane lines in the image can be determined from the detected lane center points and lane widths, which reduces the interference of noise on lane line detection and improves its robustness and accuracy.
In one possible implementation of the present application, the preset target detection model can be obtained by training on a large amount of training data, and the performance of the target detection model is continuously optimized through a loss function so that the preset target detection model meets actual application requirements.
The lane line detection method provided by the embodiments of the present application is further described below with reference to Fig. 2.
Fig. 2 is a flowchart of another lane line detection method provided by an embodiment of the present application.
As shown in Fig. 2, the lane line detection method comprises the following steps:
Step 201: acquire training data, the training data including more than a preset quantity of images and the position of each true lane line boundary point in each image.
Wherein, training data, by may include great amount of images data and to the markup information of each image data.It needs It is noted that the image data for including in training data and the markup information to image data, with target detection model Particular use is related.For example, may include largely including in training data if the purposes of target detection model is Face datection The image of face, and the markup information to face in image;For another example the purposes of the target detection model of the embodiment of the present application is Lane detection then may include a large amount of images comprising lane line in training data, and to lane line side true in image The markup information of the position of boundary's point.
It should be noted that training data needs to have one for the accuracy for guaranteeing the target detection model finally obtained Set pattern mould, when obtaining training data, wraps so as to the amount of images for including in preset in advance training data in training data The amount of images included has to be larger than preset quantity, to guarantee the performance of target detection model.In actual use, it is wrapped in training data The amount of images included can be preset according to actual needs, and the embodiment of the present application does not limit this.
In the embodiments of the present application, the training data can be obtained in many ways. For example, images containing lane lines can be collected from the network, or image data acquired in an actual application scenario (such as an automatic driving scenario) can be used as training data. After the image data is obtained, it is annotated to obtain the position of each true lane line boundary point in each image.
Step 202: an initial target detection model is trained using the training data until the loss function of the target detection model meets a preset condition, the loss function being determined according to the position of each true lane line boundary point in each image and the detection information of each grid in the image.
In the embodiments of the present application, the initial target detection model can be trained using the training data: the image data in the training data is input into the initial target detection model in sequence to obtain the detection information corresponding to each piece of image data, and the current value of the loss function is then determined according to the detection information corresponding to each piece of image data and the position of each true lane line boundary point corresponding to that image data. If the current value of the loss function meets the preset condition, it can be determined that the current performance of the target detection model meets the requirements, and the training of the target detection model can be ended; if the current value of the loss function does not meet the preset condition, it can be determined that the current performance of the target detection model does not meet the requirements, so the parameters of the target detection model are optimized and training continues with the training data on the parameter-optimized target detection model, until the loss function of the target detection model meets the preset condition.
It should be noted that the smaller the value of the loss function, the closer the detection information output by the target detection model is to the positions of the true lane line boundary points, i.e., the better the performance of the target detection model. Therefore, the preset condition that the loss function of the target detection model needs to meet may be that the value of the loss function is less than a preset threshold. In actual use, this preset condition can be set according to actual needs, which is not limited by the embodiments of the present application.
Preferably, in the embodiments of the present application, when the target detection model is trained, the three parts, namely the lane center point lateral offset, the lane center point score and the lane width, can be regressed separately. That is, the loss function of the target detection model can be divided into three parts that respectively penalize the losses of the lane center point lateral offset, the lane center point score and the lane width, so as to further improve the accuracy of the finally obtained target detection model. Optionally, an L2-norm loss function can be used to regress the lane center point lateral offset and the lane center point score, and a smooth L1 loss function can be used to regress the lane width. In actual use, the loss function corresponding to each part can be selected according to actual needs, which is not limited by the embodiments of the present application.
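The three-part regression loss described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the per-part weights and the smooth L1 transition point `beta` are assumptions introduced here for illustration.

```python
def l2_loss(pred, target):
    """L2 (squared-error) loss, used here to regress the lane center point
    lateral offset and the lane center point score."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))


def smooth_l1_loss(pred, target, beta=1.0):
    """Smooth L1 loss, used here to regress the lane width: quadratic for
    small errors, linear for large ones (robust to outlier widths)."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total


def detection_loss(offsets, scores, widths,
                   w_offset=1.0, w_score=1.0, w_width=1.0):
    """Total loss as a weighted sum of the three separately penalized parts.
    Each argument is a (predictions, targets) pair of equal-length sequences."""
    return (w_offset * l2_loss(*offsets)
            + w_score * l2_loss(*scores)
            + w_width * smooth_l1_loss(*widths))
```

Training would then stop once this total value, or each part separately, meets the preset condition described above.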
It should be noted that, when the loss function of the target detection model is divided into three parts, the training of the target detection model can be completed when each of the three parts of the loss function meets a preset condition; alternatively, the training can be completed when the sum of the values of the three parts of the loss function meets a preset condition, which is not limited by the embodiments of the present application.
Step 203: an image to be detected is obtained and input into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box.
In the embodiments of the present application, the preset target detection model may include a convolution part and a regression part. The image to be detected is input into the preset target detection model, and a feature map of the image can be obtained through the convolution part of the preset target detection model, where each point in the feature map corresponds to one grid in the image. Then, according to the obtained feature map and the image to be detected, the detection information of each grid in the image is obtained through the regression part of the preset target detection model.
Further, the target detection model of the embodiments of the present application can combine the shallow features and the deep features of the image to extract more effective structural features, thereby improving the accuracy of the target detection model. That is, in a possible implementation of the embodiments of the present application, the above convolution part is used to obtain low-level features of the image at different depths, and to perform dimensionality reduction, deconvolution and joint convolution operations on the low-level features of different depths to obtain the feature map corresponding to the image, the feature map including a feature point corresponding to each grid in the image;
the above regression part is used to obtain the detection information of each grid in combination with the image and the corresponding feature map.
It should be noted that the neural network model used in the target detection model of the embodiments of the present application may include multiple convolutional layers, so that convolution operations of different depths can be performed on the image through these convolutional layers to obtain low-level features of the image at different depths, where low-level features of different depths correspond to feature maps of different sizes. For example, the size of the feature map of low-level feature conv5_5 is 1/32 of the image, the size of the feature map of low-level feature conv6_5 is 1/64 of the image, and the size of the feature map of low-level feature conv7_5 is 1/128 of the image.
After the low-level features of different depths are obtained for the image, dimensionality reduction can be performed on them, for example by performing a convolution operation with a 1×1 convolution kernel on the low-level features of each depth, to obtain dimensionality-reduced feature maps. Deconvolution operations of different depths are then performed on the dimensionality-reduced low-level features so that they all have the same size, i.e., so that the size of each dimensionality-reduced low-level feature matches the number of grids included in the image. For example, if the grid size in the image is 16×16, then after the deconvolution operations are performed on the dimensionality-reduced low-level features of different depths, the size of each obtained feature map is 1/16 of the image. Finally, a joint convolution operation is performed on the deconvolved feature maps to obtain the feature map corresponding to the image, in which each feature point corresponds to one grid in the image.
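The shape bookkeeping in the example above (backbone features at 1/32, 1/64 and 1/128 of the image, a 16×16 grid, and a fused map at 1/16) can be sketched as follows; the concrete image sizes used are assumptions for illustration only.

```python
def fused_feature_shape(image_hw, strides=(32, 64, 128), grid=16):
    """Spatial shape of the fused feature map: one feature point per grid cell.

    Each backbone low-level feature at stride s is first reduced with a 1x1
    convolution (channel dimensionality reduction, not modeled here), then
    deconvolved by a factor of s // grid so that every branch reaches the
    same 1/grid resolution before the joint convolution fuses them.
    """
    h, w = image_hw
    fused = (h // grid, w // grid)
    for s in strides:
        up = s // grid                        # deconvolution upsampling factor
        branch = (h // s * up, w // s * up)   # branch size after deconvolution
        assert branch == fused, "every branch must align with the image grid"
    return fused
```

For a 512×1024 input, for instance, every branch is deconvolved to the common 32×64 resolution of the 16×16 grid before the joint convolution.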
Step 204: non-maximum suppression processing is performed on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width.
Step 205: the lane lines in the image are determined according to the position of each lane center point in the image and the corresponding lane width.
For the specific implementation process and principle of steps 204-205, reference can be made to the detailed description of the above embodiment, which is not repeated here.
With the method for detecting lane lines provided by the embodiments of the present application, the initial target detection model can be trained using the obtained training data until the loss function of the target detection model meets the preset condition, where the loss function is determined according to the position of each true lane line boundary point in each image and the detection information of each grid in the image; the obtained image to be detected is input into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box; non-maximum suppression processing is performed on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width; and the lane lines in the image are then determined according to the position of each lane center point in the image and the corresponding lane width. Thus, the initial target detection model is trained on a large amount of training data, and the trained target detection model is used to detect the lane center point and lane width contained in each grid of the image, which not only reduces the interference of noise with lane line detection and improves the robustness and accuracy of lane line detection, but also further optimizes the performance of the target detection model.
In order to realize the above embodiments, the present application further proposes a lane detection device.
Fig. 3 is a structural schematic diagram of a lane detection device provided by the embodiments of the present application.
As shown in Fig. 3, the lane detection device 30 comprises:
an obtaining module 31 for obtaining an image to be detected;
an input module 32 for inputting the image into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box;
a processing module 33 for performing non-maximum suppression processing on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width;
a determining module 34 for determining the lane lines in the image according to the position of each lane center point in the image and the corresponding lane width.
In actual use, the lane detection device provided by the embodiments of the present application can be configured in any electronic equipment to execute the aforementioned method for detecting lane lines.
With the lane detection device provided by the embodiments of the present application, the obtained image to be detected can be input into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box; non-maximum suppression processing is performed on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width; and the lane lines in the image are then determined according to the position of each lane center point in the image and the corresponding lane width. Thus, by dividing the image to be detected into multiple grids and using the trained target detection model to detect the lane center point and lane width contained in each grid, the lane lines in the image can then be determined according to the multiple detected lane center points and lane widths, thereby reducing the interference of noise with lane line detection and improving the robustness and accuracy of lane line detection.
In a possible implementation of the present application, the above processing module 33 is specifically used for:
for each grid in the image, determining the prediction box with the largest corresponding lane center point score in the grid as the optimal prediction box corresponding to the grid;
for each row of grids, selecting, at every preset step, the optimal prediction boxes whose scores are greater than a threshold as target prediction boxes;
for each target prediction box, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and the prediction box width adjustment value corresponding to the target prediction box.
Further, in another possible implementation of the present application, the above processing module 33 is also used for:
for each target prediction box, determining the position of a lane center point in the image according to the lane center point lateral offset corresponding to the target prediction box and the coordinates of the grid to which the target prediction box belongs;
determining the lane width corresponding to the lane center point according to the prediction box width adjustment value corresponding to the target prediction box and the width of the target prediction box.
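The post-processing performed by the processing module can be sketched as below. This is a hedged illustration under stated assumptions: the dictionary input format, the additive combination of box width and width adjustment value, and the column-wise step sampling within each row are choices made here for clarity; the patent does not fix these details.

```python
def decode_lane_points(grid_preds, grid_size=16, score_thr=0.5, step=1):
    """Sketch of the non-maximum suppression post-processing.

    `grid_preds` maps (row, col) -> list of (lateral_offset, score, box_width,
    width_adjust) prediction boxes for that grid cell.  Returns a list of
    (row, center_x, lane_width) lane center points.
    """
    # 1) Per grid: keep only the box with the largest lane center point score.
    best = {cell: max(boxes, key=lambda b: b[1])
            for cell, boxes in grid_preds.items()}
    points = []
    # 2) Per row: sample every `step` columns, keep boxes above the threshold.
    for (row, col), (offset, score, width, adjust) in sorted(best.items()):
        if col % step != 0 or score <= score_thr:
            continue
        # 3) Decode: lateral position from the grid coordinate plus the lane
        #    center point lateral offset; lane width from the box width plus
        #    the width adjustment value (additive form assumed here).
        points.append((row, col * grid_size + offset, width + adjust))
    return points
```

In this sketch a low-scoring grid contributes no lane center point, which is how noise responses are suppressed.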
In a possible implementation of the present application, the above determining module 34 is specifically used for:
for each lane center point in the image, subtracting half of the corresponding lane width from the lateral coordinate of the lane center point position to obtain the position of the left lane line boundary point corresponding to the lane center point;
adding half of the corresponding lane width to the lateral coordinate of the lane center point position to obtain the position of the right lane line boundary point corresponding to the lane center point;
determining the lane lines in the image according to the positions of the left and right lane line boundary points corresponding to each lane center point.
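The boundary-point arithmetic performed by the determining module is simply half-width subtraction and addition around each lane center point; a minimal sketch:

```python
def lane_boundaries(center_x, lane_width):
    """Left and right lane line boundary points for one lane center point:
    the lateral coordinate minus/plus half the corresponding lane width."""
    half = lane_width / 2.0
    return center_x - half, center_x + half
```

The lane lines can then be determined from the resulting sequences of left and right boundary points, as described above.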
In a possible implementation of the present application, the above target detection model includes a convolution part and a regression part;
the convolution part is used for obtaining low-level features of the image at different depths, and performing dimensionality reduction, deconvolution and joint convolution operations on the low-level features of different depths to obtain the feature map corresponding to the image, the feature map including a feature point corresponding to each grid in the image;
the regression part is used for obtaining the detection information of each grid in combination with the image and the corresponding feature map.
Further, in another possible implementation of the present application, the above lane detection device 30 further includes a training module;
correspondingly, the above obtaining module 31 is also used for obtaining training data, the training data including more than a preset number of images and the position of each true lane line boundary point in each image;
the above training module is specifically used for training an initial target detection model using the training data until the loss function of the target detection model meets a preset condition, the loss function being determined according to the position of each true lane line boundary point in each image and the detection information of each grid in the image.
It should be noted that the aforementioned explanation of the embodiments of the method for detecting lane lines shown in Fig. 1 and Fig. 2 also applies to the lane detection device 30 of this embodiment, and details are not repeated here.
With the lane detection device provided by the embodiments of the present application, the initial target detection model can be trained using the obtained training data until the loss function of the target detection model meets the preset condition, where the loss function is determined according to the position of each true lane line boundary point in each image and the detection information of each grid in the image; the obtained image to be detected is input into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box; non-maximum suppression processing is performed on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width; and the lane lines in the image are then determined according to the position of each lane center point in the image and the corresponding lane width. Thus, the initial target detection model is trained on a large amount of training data, and the trained target detection model is used to detect the lane center point and lane width contained in each grid of the image, which not only reduces the interference of noise with lane line detection and improves the robustness and accuracy of lane line detection, but also further optimizes the performance of the target detection model.
In order to realize the above embodiments, the present application further proposes an electronic equipment.
Fig. 4 is a structural schematic diagram of the electronic equipment of an embodiment of the present application.
As shown in Fig. 4, the above electronic equipment 200 includes:
a memory 210, a processor 220, and a bus 230 connecting the different components (including the memory 210 and the processor 220), the memory 210 storing a computer program which, when executed by the processor 220, implements the method for detecting lane lines described in the embodiments of the present application.
The bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The electronic equipment 200 typically includes a variety of electronic-readable media. These media can be any available media that can be accessed by the electronic equipment 200, including volatile and non-volatile media, and removable and non-removable media.
The memory 210 may also include computer system readable media in the form of volatile memory, such as a random access memory (RAM) 240 and/or a cache memory 250. The electronic equipment 200 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 260 can be provided for reading from and writing to a non-removable, non-volatile magnetic medium (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM or other optical media), can also be provided. In these cases, each drive can be connected to the bus 230 through one or more data media interfaces. The memory 210 may include at least one program product having a set (for example, at least one) of program modules configured to perform the functions of the embodiments of the present application.
A program/utility 280 having a set (at least one) of program modules 270 may be stored, for example, in the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules and program data; each or some combination of these examples may include an implementation of a network environment. The program modules 270 generally perform the functions and/or methods of the embodiments described herein.
The electronic equipment 200 may also communicate with one or more external devices 290 (such as a keyboard, a pointing device, a display 291, etc.), with one or more devices that enable a user to interact with the electronic equipment 200, and/or with any device (such as a network card, a modem, etc.) that enables the electronic equipment 200 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 292. Moreover, the electronic equipment 200 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic equipment 200 through the bus 230. It should be understood that, although not shown in the drawings, other hardware and/or software modules can be used in conjunction with the electronic equipment 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, etc.
The processor 220 executes various functional applications and data processing by running the programs stored in the memory 210.
It should be noted that, for the implementation process and technical principle of the electronic equipment of this embodiment, reference can be made to the aforementioned explanation of the method for detecting lane lines of the embodiments of the present application, and details are not repeated here.
The electronic equipment provided by the embodiments of the present application can execute the aforementioned method for detecting lane lines: the obtained image to be detected is input into the preset target detection model to obtain the detection information of each grid in the image, the detection information including the lane center point lateral offset, lane center point score and prediction box width adjustment value corresponding to each prediction box; non-maximum suppression processing is performed on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width; and the lane lines in the image are then determined according to the position of each lane center point in the image and the corresponding lane width. Thus, by dividing the image to be detected into multiple grids and using the trained target detection model to detect the lane center point and lane width contained in each grid, the lane lines in the image can then be determined according to the multiple detected lane center points and lane widths, thereby reducing the interference of noise with lane line detection and improving the robustness and accuracy of lane line detection.
In order to realize the above embodiments, the present application further proposes a computer readable storage medium.
The computer readable storage medium stores a computer program which, when executed by a processor, implements the method for detecting lane lines described in the embodiments of the present application.
In order to realize the above embodiments, a further embodiment of the present application provides a computer program which, when executed by a processor, implements the method for detecting lane lines described in the embodiments of the present application.
In an optional implementation, this embodiment can adopt any combination of one or more computer-readable media. The computer-readable medium can be a computer-readable signal medium or a computer readable storage medium. A computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of the computer readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer readable storage medium can be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal can take a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium can also be any computer-readable medium other than a computer readable storage medium that can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device.
The program code contained on the computer-readable medium can be transmitted by any suitable medium, including, but not limited to, wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The computer program code for carrying out the operations of the present application can be written in one or more programming languages or a combination thereof, the programming languages including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case involving a remote electronic device, the remote electronic device can be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external electronic device (for example, through the Internet using an Internet service provider).
Other embodiments of the present application will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present application which follow the general principles of the present application and include common knowledge or conventional techniques in the art not disclosed by the present application. The specification and embodiments are to be regarded as illustrative only, with the true scope and spirit of the present application being indicated by the claims.
It should be understood that the present application is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (15)

1. A method for detecting lane lines, characterized by comprising:
obtaining an image to be detected;
inputting the image into a preset target detection model to obtain detection information of each grid in the image, the detection information including a lane center point lateral offset, a lane center point score and a prediction box width adjustment value corresponding to each prediction box;
performing non-maximum suppression processing on the detection information of each grid to obtain a position of each lane center point in the image and a corresponding lane width; and
determining lane lines in the image according to the position of each lane center point in the image and the corresponding lane width.
2. The method according to claim 1, characterized in that the performing non-maximum suppression processing on the detection information of each grid to obtain a position of each lane center point in the image and a corresponding lane width comprises:
for each grid in the image, determining the prediction box with the largest corresponding lane center point score in the grid as the optimal prediction box corresponding to the grid;
for each row of grids, selecting, at every preset step, the optimal prediction boxes whose scores are greater than a threshold as target prediction boxes; and
for each target prediction box, determining the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and the prediction box width adjustment value corresponding to the target prediction box.
3. The method according to claim 2, characterized in that the determining, for each target prediction box, the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and the prediction box width adjustment value corresponding to the target prediction box comprises:
for each target prediction box, determining the position of a lane center point in the image according to the lane center point lateral offset corresponding to the target prediction box and the coordinates of the grid to which the target prediction box belongs; and
determining the lane width corresponding to the lane center point according to the prediction box width adjustment value corresponding to the target prediction box and the width of the target prediction box.
4. The method according to claim 1, characterized in that the determining lane lines in the image according to the position of each lane center point in the image and the corresponding lane width comprises:
for each lane center point in the image, subtracting half of the corresponding lane width from the lateral coordinate of the lane center point position to obtain the position of the left lane line boundary point corresponding to the lane center point;
adding half of the corresponding lane width to the lateral coordinate of the lane center point position to obtain the position of the right lane line boundary point corresponding to the lane center point; and
determining the lane lines in the image according to the positions of the left lane line boundary point and the right lane line boundary point corresponding to each lane center point.
5. The method according to claim 1, characterized in that the target detection model includes a convolution part and a regression part;
the convolution part is used for obtaining low-level features of the image at different depths, and performing dimensionality reduction, deconvolution and joint convolution operations on the low-level features of different depths to obtain a feature map corresponding to the image, the feature map including a feature point corresponding to each grid in the image; and
the regression part is used for obtaining the detection information of each grid in combination with the image and the corresponding feature map.
6. The method according to claim 1, wherein before the inputting the image into the preset target detection model to obtain the detection information of each grid in the image, the method further comprises:
obtaining training data, the training data comprising more than a preset number of images and the positions of the true lane line boundary points in each image;
training an initial target detection model with the training data until a loss function of the target detection model satisfies a preset condition, the loss function being determined according to the positions of the true lane line boundary points in each image and the detection information of each grid in the image.
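Claims 6 and 12 state only that the loss compares the per-grid detection information with the true lane line boundary points. One plausible way to turn boundary points into regression targets, assuming each true lane center point is the midpoint of a left/right boundary pair and the true lane width is their lateral distance (how targets are assigned to grids and the exact loss form are not specified by the claims):

```python
def grid_targets(left_pts, right_pts):
    """Build per-point regression targets from true left/right lane line
    boundary point pairs: target center = midpoint, target width = lateral
    distance. Returns a list of (center_x, center_y, width) tuples."""
    targets = []
    for (lx, ly), (rx, ry) in zip(left_pts, right_pts):
        targets.append(((lx + rx) / 2.0, (ly + ry) / 2.0, rx - lx))
    return targets
```

Each target tuple would then be compared against the decoded lane center point lateral offset and width adjustment value of the responsible grid's prediction box.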
7. A lane line detection apparatus, comprising:
an obtaining module, configured to obtain an image to be detected;
an input module, configured to input the image into a preset target detection model to obtain detection information of each grid in the image, the detection information comprising a lane center point lateral offset, a lane center point score and a prediction box width adjustment value corresponding to each prediction box;
a processing module, configured to perform non-maximum suppression on the detection information of each grid to obtain the position of each lane center point in the image and the corresponding lane width;
a determining module, configured to determine the lane lines in the image according to the position of each lane center point in the image and the corresponding lane width.
8. The apparatus according to claim 7, wherein the processing module is specifically configured to:
for each grid in the image, determine the prediction box with the highest lane center point score in the grid as the optimal prediction box corresponding to the grid;
for each row of grids, select, at every preset step, an optimal prediction box whose score is greater than a threshold as a target prediction box;
for each target prediction box, determine the position of a lane center point in the image and the corresponding lane width according to the lane center point lateral offset and the prediction box width adjustment value corresponding to the target prediction box.
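The non-maximum suppression of claim 8 (mirroring method claim 2) first reduces each grid to its best-scoring prediction box, then samples each row of grids. A sketch under the assumption that "every preset step" means keeping every step-th column whose best score exceeds the threshold (names hypothetical; the claim leaves the sampling rule open):

```python
def select_target_grids(best_scores, step, threshold):
    """best_scores[r][c] is the highest lane center point score among the
    prediction boxes of grid (r, c). Walk each row at the given column step
    and keep grids whose best score exceeds the threshold; those grids'
    optimal prediction boxes become the target prediction boxes."""
    targets = []
    for r, row in enumerate(best_scores):
        for c in range(0, len(row), step):
            if row[c] > threshold:
                targets.append((r, c))
    return targets
```

The selected (row, column) indices would then be decoded into lane center points and lane widths as in claims 3 and 9.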
9. The apparatus according to claim 8, wherein the processing module is specifically configured to:
for each target prediction box, determine the position of a lane center point in the image according to the lane center point lateral offset corresponding to the target prediction box and the coordinates of the grid to which the target prediction box belongs;
determine the lane width corresponding to the lane center point according to the prediction box width adjustment value corresponding to the target prediction box and the width of the target prediction box.
10. The apparatus according to claim 7, wherein the determining module is specifically configured to:
for each lane center point in the image, subtract half of the corresponding lane width from the lateral coordinate of the lane center point position to obtain the position of the left lane line boundary point corresponding to the lane center point;
add half of the corresponding lane width to the lateral coordinate of the lane center point position to obtain the position of the right lane line boundary point corresponding to the lane center point;
determine the lane lines in the image according to the positions of the left lane line boundary points and the right lane line boundary points corresponding to the lane center points.
11. The apparatus according to claim 7, wherein the target detection model comprises a convolutional part and a regression part;
the convolutional part is configured to obtain low-level features of the image at different depths, and to perform dimensionality reduction, deconvolution and joint convolution operations on the low-level features at the different depths to obtain a feature map corresponding to the image, the feature map comprising a feature point corresponding to each grid in the image;
the regression part is configured to obtain the detection information of each grid from the image and the corresponding feature map.
12. The apparatus according to claim 7, further comprising a training module, wherein:
the obtaining module is further configured to obtain training data, the training data comprising more than a preset number of images and the positions of the true lane line boundary points in each image;
the training module is configured to train an initial target detection model with the training data until a loss function of the target detection model satisfies a preset condition, the loss function being determined according to the positions of the true lane line boundary points in each image and the detection information of each grid in the image.
13. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the lane line detection method according to any one of claims 1-6.
14. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the lane line detection method according to any one of claims 1-6.
15. A computer program product, wherein, when instructions in the computer program product are executed by a processor, the lane line detection method according to any one of claims 1-6 is implemented.
CN201910536493.3A 2019-06-20 2019-06-20 Lane line detection method, lane line detection device, electronic device, and storage medium Active CN110276293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910536493.3A CN110276293B (en) 2019-06-20 2019-06-20 Lane line detection method, lane line detection device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN110276293A true CN110276293A (en) 2019-09-24
CN110276293B CN110276293B (en) 2021-07-27

Family

ID=67962302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910536493.3A Active CN110276293B (en) 2019-06-20 2019-06-20 Lane line detection method, lane line detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110276293B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106842231A (en) * 2016-11-08 2017-06-13 长安大学 Road edge identification and tracking method
US20180345960A1 (en) * 2017-06-06 2018-12-06 Toyota Jidosha Kabushiki Kaisha Lane change assist device
CN109740469A (en) * 2018-12-24 2019-05-10 百度在线网络技术(北京)有限公司 Method for detecting lane lines, device, computer equipment and storage medium
CN109753841A (en) * 2017-11-01 2019-05-14 比亚迪股份有限公司 Lane detection method and apparatus
CN109829351A (en) * 2017-11-23 2019-05-31 华为技术有限公司 Lane information detection method and device, and computer-readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969655A (en) * 2019-10-24 2020-04-07 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
CN110969655B (en) * 2019-10-24 2023-08-18 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
CN112199999A (en) * 2020-09-09 2021-01-08 浙江大华技术股份有限公司 Road detection method, road detection device, storage medium and electronic equipment
CN112232431A (en) * 2020-10-23 2021-01-15 携程计算机技术(上海)有限公司 Watermark detection model training method, watermark detection method, system, device and medium
CN112434591A (en) * 2020-11-19 2021-03-02 腾讯科技(深圳)有限公司 Lane line determination method and device
CN112434591B (en) * 2020-11-19 2022-06-17 腾讯科技(深圳)有限公司 Lane line determination method and device
CN112528864A (en) * 2020-12-14 2021-03-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
CN113715816A (en) * 2021-09-30 2021-11-30 岚图汽车科技有限公司 Lane centering function control method, device and equipment and readable storage medium
CN114694109A (en) * 2022-05-31 2022-07-01 苏州魔视智能科技有限公司 Lane line detection method, device, electronic device and computer-readable storage medium
CN115166743A (en) * 2022-08-30 2022-10-11 长沙隼眼软件科技有限公司 Lane automatic calibration method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110276293B (en) 2021-07-27

Similar Documents

Publication Publication Date Title
CN110276293A (en) Method for detecting lane lines, device, electronic equipment and storage medium
CN110286387B (en) Obstacle detection method and device applied to automatic driving system and storage medium
CN110263713A (en) Method for detecting lane lines, device, electronic equipment and storage medium
CN110263714A (en) Method for detecting lane lines, device, electronic equipment and storage medium
CN103383731B Projection interaction method and system based on fingertip positioning, and computing device
CN110232368A (en) Method for detecting lane lines, device, electronic equipment and storage medium
EP3937077B1 (en) Lane marking detecting method, apparatus, electronic device, storage medium, and vehicle
CN113378760A (en) Training target detection model and method and device for detecting target
CN108876857A (en) Localization method, system, equipment and the storage medium of automatic driving vehicle
CN111402326A (en) Obstacle detection method and device, unmanned vehicle and storage medium
CN108268831A Robustness testing method and system for unmanned vehicle visual detection
CN115273039A (en) Small obstacle detection method based on camera
CN112863187A (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN110111018B (en) Method, device, electronic equipment and storage medium for evaluating vehicle sensing capability
CN115139303A (en) Grid well lid detection method, device, equipment and storage medium
CN113537026B (en) Method, device, equipment and medium for detecting graphic elements in building plan
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
CN113591569A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN103837135A (en) Workpiece detecting method and system
CN117392629A (en) Multi-mode descriptor location recognition method and system based on camera and radar fusion
CN112395956A (en) Method and system for detecting passable area facing complex environment
CN111951328A (en) Object position detection method, device, equipment and storage medium
CN114120795B (en) Map drawing method and device
CN115618602A (en) Lane-level scene simulation method and system
CN109816726A Visual odometry map updating method and system based on a depth filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211011

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Patentee after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.