CN109583393A - Lane line endpoint recognition method and apparatus, device, and medium - Google Patents

Lane line endpoint recognition method and apparatus, device, and medium Download PDF

Info

Publication number
CN109583393A
CN109583393A
Authority
CN
China
Prior art keywords
lane line
bounding box
endpoints
target detection
sample image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811478746.8A
Other languages
Chinese (zh)
Other versions
CN109583393B (en)
Inventor
高三元
冯汉平
鞠伟平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kuandong (Huzhou) Technology Co.,Ltd.
Original Assignee
Kuandeng (Beijing) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kuandeng (Beijing) Technology Co., Ltd.
Priority to CN201811478746.8A priority Critical patent/CN109583393B/en
Publication of CN109583393A publication Critical patent/CN109583393A/en
Application granted granted Critical
Publication of CN109583393B publication Critical patent/CN109583393B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a lane line endpoint recognition method and a corresponding apparatus, device, and medium. The method at least includes: according to the lane lines contained in lane line sample images, defining bounding boxes for enclosing the lane line endpoints contained in the sample images; according to this definition of the bounding box of a lane line endpoint, performing target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify bounding boxes of lane line endpoints; and determining the positions of the lane line endpoints according to the result of the target detection. By defining suitably sized bounding boxes for lane line endpoints according to the lane lines and performing target detection based on bounding box regression, the application helps to accurately identify lane line endpoints in the image to be recognized and determine their positions.

Description

Lane line endpoint recognition method and apparatus, device, and medium
Technical field
This application relates to the field of machine learning technology, and in particular to a lane line endpoint recognition method and apparatus, device, and medium.
Background art
With the rapid development of machine learning technology, deep learning models are being applied in more and more fields, including key point detection. The performance of key point detection, however, can differ considerably across application scenarios.
For example, in face key point detection, the face key points are usually located at or near the center of the image, which helps to obtain accurate detection results. In high-precision map production, an important task is to extract the endpoints of dashed lane lines; in the prior art, face key point detection schemes are also used to detect the endpoints of dashed lane lines.
However, since the endpoints of dashed lane lines are usually located near the edges of the image, it is often difficult to obtain sufficiently accurate detection results.
Summary of the invention
The embodiments of the present application provide a lane line endpoint recognition method and apparatus, device, and medium, so as to solve the following technical problem in the prior art: existing key point detection schemes have difficulty producing accurate detection results for the endpoints of dashed lane lines.
The embodiments of the present application adopt the following technical solutions:
A lane line endpoint recognition method, comprising:
according to the lane lines contained in a lane line sample image, defining a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, performing target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determining the position of the lane line endpoint according to the result of the target detection.
Optionally, determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background;
determining the position of the lane line endpoint according to the result of the target detection and the result of the image segmentation.
Optionally, defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image specifically comprises:
defining a bounding box for enclosing a lane line contained in the lane line sample image;
defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image.
Optionally, defining the bounding box of the lane line endpoint according to the width and/or height of the bounding box of the lane line further comprises:
limiting the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and its side length is not greater than the minimum of: the size threshold, the width of the bounding box of the lane line, and the height of the bounding box of the lane line.
Optionally, performing target detection in the image to be recognized according to the definition of the bounding box of the lane line endpoint and by using the target detection algorithm based on a convolutional neural network specifically comprises:
obtaining multiple lane line sample images for at least one lane scene;
annotating the lane lines and lane line endpoints contained in the multiple lane line sample images respectively;
training a bounding box regression model by using the target detection algorithm based on a convolutional neural network, according to the multiple lane line sample images and their annotations as well as the definition of the bounding box of the lane line endpoint;
performing target detection in the image to be recognized by using the trained bounding box regression model.
Optionally, determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
determining the position of the lane line endpoint according to the center point of the identified bounding box of the lane line endpoint.
Optionally, the lane line is a dashed lane line.
A lane line endpoint recognition apparatus, comprising:
a definition module, configured to define, according to the lane lines contained in a lane line sample image, a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
a recognition module, configured to perform target detection in an image to be recognized according to the definition of the bounding box of the lane line endpoint and by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
a determination module, configured to determine the position of the lane line endpoint according to the result of the target detection.
Optionally, the determination module determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
the determination module performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background;
and determining the position of the lane line endpoint according to the result of the target detection and the result of the image segmentation.
Optionally, the definition module defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image specifically comprises:
the definition module defining a bounding box for enclosing a lane line contained in the lane line sample image;
and defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image.
Optionally, the definition module defining the bounding box of the lane line endpoint according to the width and/or height of the bounding box of the lane line further comprises:
the definition module limiting the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and its side length is not greater than the minimum of: the size threshold, the width of the bounding box of the lane line, and the height of the bounding box of the lane line.
Optionally, the recognition module performing target detection in the image to be recognized according to the definition of the bounding box of the lane line endpoint and by using the target detection algorithm based on a convolutional neural network specifically comprises:
the recognition module obtaining multiple lane line sample images for at least one lane scene;
annotating the lane lines and lane line endpoints contained in the multiple lane line sample images respectively;
training a bounding box regression model by using the target detection algorithm based on a convolutional neural network, according to the multiple lane line sample images and their annotations as well as the definition of the bounding box of the lane line endpoint;
and performing target detection in the image to be recognized by using the trained bounding box regression model.
Optionally, the determination module determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
the determination module determining the position of the lane line endpoint according to the center point of the identified bounding box of the lane line endpoint.
Optionally, the lane line is a dashed lane line.
A lane line endpoint recognition device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can:
according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determine the position of the lane line endpoint according to the result of the target detection.
A non-volatile computer storage medium for lane line endpoint recognition, storing computer-executable instructions configured to:
according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determine the position of the lane line endpoint according to the result of the target detection.
At least one of the above technical solutions adopted in the embodiments of the present application can achieve the following beneficial effect: by defining, according to the lane lines, bounding boxes of suitable size for the lane line endpoints and performing target detection based on bounding box regression, lane line endpoints can be accurately identified in the image to be recognized and their positions determined.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their descriptions are used to explain the application and do not constitute an undue limitation on it. In the drawings:
Fig. 1 is a schematic flowchart of a lane line endpoint recognition method provided by some embodiments of the present application;
Fig. 2 is a schematic diagram of dashed lane line endpoints and their bounding boxes provided by some embodiments of the present application;
Fig. 3 is a schematic detailed flowchart of the above lane line endpoint recognition method provided by some embodiments of the present application;
Fig. 4 is a schematic structural diagram of a lane line endpoint recognition apparatus corresponding to Fig. 1 provided by some embodiments of the present application;
Fig. 5 is a schematic structural diagram of a lane line endpoint recognition device corresponding to Fig. 1 provided by some embodiments of the present application.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the application are described clearly and completely below in combination with specific embodiments of the application and the corresponding drawings. Obviously, the described embodiments are only a part of the embodiments of the application, not all of them. Based on the embodiments of the application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the application.
In some embodiments of the present application, a way of defining bounding boxes for lane line endpoints is proposed. The defined bounding box differs from existing bounding boxes for objects that occupy a box-like area (cars, aircraft, etc.): an existing bounding box generally needs to approximate the edge of the object rather precisely with a rectangle that is as small as possible, whereas the bounding box defined in this application need not satisfy this limitation, on the one hand because the object is an endpoint, and on the other hand because the bounding box is defined with reference to the corresponding lane line.
Based on the defined bounding boxes of lane line endpoints, a model can be trained by bounding box regression to identify bounding boxes of lane line endpoints in images to be recognized other than the lane line sample images, and to determine the positions of the lane line endpoints. To improve recognition accuracy, processing such as image segmentation and image enhancement can further be carried out, and the position of the lane line endpoint can then be determined comprehensively. The solution of the application is described in detail below.
Fig. 1 is a schematic flowchart of a lane line endpoint recognition method provided by some embodiments of the present application. In this flow, from the device perspective, the execution subject can be one or more computing devices, for example a single machine learning server, a machine learning server cluster, an image segmentation server, etc.; from the program perspective, the execution subject can correspondingly be a program carried on these computing devices, for example a neural network modeling platform, an image processing platform, etc.
The flow in Fig. 1 may include the following steps:
S102: according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image.
In some embodiments of the present application, a lane line contained in a lane line image generally has one or two lane line endpoints. Lane lines can be of many kinds, depending on the actual recognition needs; for example, a lane line can be a dashed or a solid lane line, a single line or a double line, a white or a yellow lane line, etc. In practice, a dashed lane line, being composed of multiple discrete line segments, makes it harder to accurately identify its endpoints, yet the solution of this application also achieves a good recognition effect for dashed lane line endpoints. Some of the embodiments below are therefore mainly described by taking the case where the lane line in Fig. 1 is a dashed lane line as an example.
In some embodiments of the present application, there are multiple lane line sample images, which are used to train a corresponding machine learning model; this machine learning model is at least used to detect bounding boxes of lane line endpoints based on the definition of the bounding box. The bounding box of a lane line endpoint can be defined according to a variety of factors, such as the lane line itself, the proportion of other objects in the image, a preset size threshold, and the distance of the lane line endpoint from the image edge.
S104: according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized (which mainly refers to an image other than the sample images, for example a newly collected road surface image) by using a target detection algorithm based on a convolutional neural network, so as to identify the bounding box of a lane line endpoint.
In some embodiments of the present application, the image to be recognized is processed region by region based on the convolutional neural network, and an overall result is then obtained from the multiple local results, which allows the bounding box to be extracted more accurately for a small target such as a lane line endpoint.
S106: determine the position of the lane line endpoint according to the result of the target detection.
In some embodiments of the present application, after the bounding box of a lane line endpoint has been identified by target detection, the position of the lane line endpoint can be determined directly from the bounding box, for example by taking the center point of the bounding box, or any point in its central region, as the position of the lane line endpoint; alternatively, other algorithms can be used to further analyze the content of the bounding box in order to determine the position of the lane line endpoint.
With the method of Fig. 1, by defining suitable bounding boxes for lane line endpoints according to the lane lines and performing target detection based on bounding box regression, lane line endpoints can be accurately identified in the image to be recognized and their positions determined.
Based on the method of Fig. 1, some embodiments of the present application further provide some specific implementations and extensions of the method, which are described below.
In some embodiments of the present application, for step S106, determining the position of the lane line endpoint according to the result of the target detection may, for example, include: performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background; and determining the position of the lane line endpoint according to the result of the image segmentation, or according to the combination of the result of the target detection and the result of the image segmentation. Taking the latter approach as an example, the coordinates of the center point of the identified bounding box and the coordinates of at least one foreground pixel obtained by image segmentation can be averaged, and the resulting coordinates determined as the position of the lane line endpoint.
The foreground pixels can, for example, be lane line pixels, or more specifically lane line edge pixels, and the background pixels can be road surface pixels other than the lane line. The image segmentation can be realized with a correspondingly trained model; if the annotations of the samples used to train this model (bounding box images of lane lines, etc.) are accurate enough (for example, accurate to the lane line endpoint pixels), the segmentation can directly treat the lane line endpoints as foreground. For example, an image semantic segmentation algorithm can be used to perform the image segmentation, which helps to obtain a more accurate segmentation result.
The algorithms used for the above target detection and image segmentation are not specifically limited here; existing algorithms, or algorithms improved and adapted to the actual scene, can be used, as long as they achieve the described effect. For example, the Mask R-CNN algorithm can be used.
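As an illustration only, the sketch below shows how a Mask R-CNN style detector could be run on a road image to obtain candidate endpoint bounding boxes together with per-box foreground masks. It assumes a torchvision-style API and a hypothetical single "endpoint" class with index 1; the application does not prescribe this library or these names.

```python
# Illustrative sketch only; not the implementation of the present application.
# Assumes torchvision's Mask R-CNN and a model that has been fine-tuned on
# endpoint annotations (weights are omitted here for brevity).
import torch
import torchvision

# Two classes: background (0) and a hypothetical "lane line endpoint" class (1).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.eval()  # inference mode: takes a list of CHW float tensors scaled to [0, 1]

def detect_endpoint_boxes(image_chw, score_thresh=0.5):
    """Return endpoint bounding boxes, scores, and soft masks above a score threshold."""
    with torch.no_grad():
        output = model([image_chw])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'
    keep = output["scores"] >= score_thresh
    return output["boxes"][keep], output["scores"][keep], output["masks"][keep]

# Example usage with a dummy 3 x 720 x 1280 road image tensor.
boxes, scores, masks = detect_endpoint_boxes(torch.rand(3, 720, 1280))
```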
In some embodiments of the present application, assuming that the bounding box of the lane line is defined according to the lane line itself, then for step S102, defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image may, for example, include: defining a bounding box for enclosing a lane line contained in the lane line sample image; and defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image. The bounding box of the lane line can refer to the bounding box of each line segment of a dashed lane line, or to the bounding box of the lane line as a whole, and can be determined in the same way as the bounding box of an object with an obvious outline, such as a car or an aircraft.
A lane line endpoint is directly related to the lane line it belongs to, so it is reasonable to refer to the size of that lane line when defining the bounding box of the endpoint. As a result, the bounding boxes defined in different lane line sample images may not all be the same size, but each is sized appropriately relative to its lane line, which helps to extract the features of the endpoint region more effectively.
For example, the bounding box of the lane line can be determined first and its width and height obtained; the smaller of the width and the height is then taken, and the width and/or height of the bounding box of the lane line endpoint is defined according to this smaller value.
Further, considering that a lane line itself may occupy a relatively large proportion of the image, an endpoint bounding box defined only from the lane line bounding box could turn out too large. For this reason, a size threshold can be preset to limit the maximum size of the bounding box of the lane line endpoint. The size threshold can be the same value set uniformly for all sample images (for example, the width and height of the bounding box are set to be no more than 50 pixels), or an adaptive value set separately according to the size of each sample image (for example, the width and height of the bounding box are set to be no more than one twentieth of the smaller of the width and height of the corresponding image).
In some embodiments of the present application, the bounding box of the lane line endpoint can be defined as a rectangle, but it can also be defined as a square: a square is more symmetric and reduces the number of size parameters of the bounding box (the two parameters, width and height, merge into a single side length), which saves resources. In addition, in certain images the lane line endpoint may be very close to the image edge; in that case, if the bounding box were defined as a square, its side length might be too small, which is unfavorable for extracting features inside the bounding box. In such a case the bounding box can be defined as a rectangle, so that features are extracted as far as possible in the direction perpendicular to the edge.
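Purely as an illustration of the definition discussed above, the following sketch derives an endpoint bounding box whose nominal side length is the minimum of the lane line box's width, its height, and a preset size threshold, and which is clipped to the image so that near a border it degenerates into a rectangle. The function name and parameters are assumptions for this sketch; the 50-pixel default echoes the example threshold in the detailed flow below.

```python
# Illustrative sketch of the endpoint bounding box definition described above.
# Names and defaults are assumptions for illustration, not the application's code.
def endpoint_bounding_box(ex, ey, lane_w, lane_h, img_w, img_h, size_thresh=50):
    """Build a box (x1, y1, x2, y2) around the endpoint (ex, ey).

    The nominal box is a square whose side is min(lane_w, lane_h, size_thresh);
    clipping to the image borders may turn it into a rectangle near the edges.
    """
    side = min(lane_w, lane_h, size_thresh)
    half = side / 2.0
    x1 = max(0.0, ex - half)
    y1 = max(0.0, ey - half)
    x2 = min(float(img_w), ex + half)
    y2 = min(float(img_h), ey + half)
    return x1, y1, x2, y2

# Example: an endpoint 10 px from the left border of a 1280 x 720 image,
# belonging to a dashed-lane segment whose bounding box is 30 x 120 px.
print(endpoint_bounding_box(10, 300, 30, 120, 1280, 720))  # (0.0, 285.0, 25.0, 315.0)
```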
More intuitively, some embodiments of the present application provide a schematic diagram of dashed lane line endpoints and their bounding boxes, as shown in Fig. 2. Fig. 2 shows a dashed lane line, with dashed square boxes illustratively marking the bounding boxes of two of its endpoints; the center point of each bounding box (indicated by a cross) can be regarded as the endpoint.
In some embodiments of the present application, for step S104, performing target detection in the image to be recognized according to the definition of the bounding box of the lane line endpoint and by using the target detection algorithm based on a convolutional neural network may, for example, include: obtaining multiple lane line sample images for at least one lane scene; annotating the lane lines and lane line endpoints contained in the multiple lane line sample images respectively; training a bounding box regression model by using the target detection algorithm based on a convolutional neural network, according to the multiple lane line sample images and their annotations as well as the definition of the bounding box of the lane line endpoint; and performing target detection in the image to be recognized by using the trained bounding box regression model.
Lane line images in different lane scenes may have scene-specific characteristics, so distinguishing lane scenes helps subsequent, more accurate recognition of lane line endpoints. Lane scenes can be defined according to the actual recognition requirements, for example a single-lane scene, a crossroad scene, a U-turn scene, a mixed solid-and-dashed double-line scene, a two-way multi-lane scene, a curve scene, etc.; no specific limitation is made here, and these examples are given only to aid understanding.
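To make the training flow described above concrete, the sketch below shows one way annotated endpoints could be converted into detection targets and used for a single bounding box regression step. It assumes a torchvision-style detection model (as in the earlier inference sketch) and the illustrative endpoint_bounding_box helper defined above; the names and the single "endpoint" class are assumptions rather than the application's implementation.

```python
# Illustrative training-step sketch; assumes the torchvision-style detector and the
# endpoint_bounding_box helper sketched earlier. Names and values are assumptions.
import torch

def build_targets(endpoints, lane_boxes, img_w, img_h, size_thresh=50):
    """Convert annotated endpoints (x, y) and their lane box sizes (w, h) into one target dict."""
    boxes = [endpoint_bounding_box(ex, ey, w, h, img_w, img_h, size_thresh)
             for (ex, ey), (w, h) in zip(endpoints, lane_boxes)]
    return {
        "boxes": torch.tensor(boxes, dtype=torch.float32),    # [N, 4] as (x1, y1, x2, y2)
        "labels": torch.ones(len(boxes), dtype=torch.int64),  # single "endpoint" class
    }  # a Mask R-CNN style model would additionally require a per-target "masks" entry

def train_step(model, optimizer, images, targets):
    """One bounding-box regression step: images is a list of CHW tensors, targets a list of dicts."""
    model.train()
    loss_dict = model(images, targets)  # e.g. classification + box regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```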
According to the above description, some embodiments of the present application further provide a detailed flow of the above lane line endpoint recognition method, as shown in Fig. 3.
The flow in Fig. 3 may include the following steps:
S302: collect a large number of sample images containing dashed lane lines under various lane scenes.
S304: according to the lane scene, annotate the dashed lane lines themselves and the lane line endpoints they contain.
S306: for each lane line endpoint, define a bounding box for the endpoint according to the size of the corresponding dashed lane line itself. Specifically, the bounding box is a square whose side length is the minimum of a preset size threshold and the width and height of the bounding box of the corresponding dashed lane line, which can for example be expressed as min(min(w, h), 50), where min() denotes the minimum function, w and h denote the width and height respectively, and 50 is the size threshold.
S308: use the target detection algorithm to identify the bounding boxes of lane line endpoints; the center point of an identified bounding box can be regarded as a lane line endpoint.
S310: within the identified bounding box, further use an image semantic segmentation algorithm to separate foreground from background, where the foreground is regarded as lane line endpoint pixels and the background as all other pixels.
S312: extract the lane line endpoint according to the image semantic segmentation result.
S314: combine the target detection result and the image semantic segmentation result (for example, by averaging the two sets of coordinates, as sketched below) to finally determine the position of the lane line endpoint.
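A minimal sketch of step S314, under the assumption that the segmentation output is a binary foreground mask in full-image coordinates: the endpoint position is taken as the average of the detected box center and the centroid of the foreground pixels found inside the box. The helper and its names are illustrative only.

```python
# Illustrative sketch of S314: combine the detection and segmentation results.
# Assumes a binary foreground mask aligned with the full image; names are illustrative.
import numpy as np

def endpoint_position(box, fg_mask):
    """Average the box center with the foreground-pixel centroid found inside the box."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0           # detection result: box center
    ys, xs = np.nonzero(fg_mask[int(y1):int(y2), int(x1):int(x2)])
    if len(xs) == 0:                                     # no foreground found: fall back to the center
        return cx, cy
    fx, fy = xs.mean() + x1, ys.mean() + y1              # segmentation result: foreground centroid
    return (cx + fx) / 2.0, (cy + fy) / 2.0

# Example with a toy 100 x 100 mask whose foreground sits near the box center.
mask = np.zeros((100, 100), dtype=bool)
mask[48:52, 48:52] = True
print(endpoint_position((40.0, 40.0, 60.0, 60.0), mask))
```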
Based on the same idea, some embodiments of the present application further provide an apparatus, a device, and a non-volatile computer storage medium corresponding to the above method.
Fig. 4 is a schematic structural diagram of a lane line endpoint recognition apparatus corresponding to Fig. 1 provided by some embodiments of the present application. The apparatus includes:
a definition module 401, configured to define, according to the lane lines contained in a lane line sample image, a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
a recognition module 402, configured to perform target detection in an image to be recognized according to the definition of the bounding box of the lane line endpoint and by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
a determination module 403, configured to determine the position of the lane line endpoint according to the result of the target detection.
Optionally, the determination module 403 determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
the determination module 403 performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background;
and determining the position of the lane line endpoint according to the result of the target detection and the result of the image segmentation.
Optionally, the definition module 401 defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image specifically comprises:
the definition module 401 defining a bounding box for enclosing a lane line contained in the lane line sample image;
and defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image.
Optionally, the definition module 401 defining the bounding box of the lane line endpoint according to the width and/or height of the bounding box of the lane line further comprises:
the definition module 401 limiting the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
Optionally, the bounding box of the lane line endpoint is square, and its side length is not greater than the minimum of: the size threshold, the width of the bounding box of the lane line, and the height of the bounding box of the lane line.
Optionally, the recognition module 402 performing target detection in the image to be recognized according to the definition of the bounding box of the lane line endpoint and by using the target detection algorithm based on a convolutional neural network specifically comprises:
the recognition module 402 obtaining multiple lane line sample images for at least one lane scene;
annotating the lane lines and lane line endpoints contained in the multiple lane line sample images respectively;
training a bounding box regression model by using the target detection algorithm based on a convolutional neural network, according to the multiple lane line sample images and their annotations as well as the definition of the bounding box of the lane line endpoint;
and performing target detection in the image to be recognized by using the trained bounding box regression model.
Optionally, the determination module 403 determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
the determination module 403 determining the position of the lane line endpoint according to the center point of the identified bounding box of the lane line endpoint.
Optionally, the lane line is a dashed lane line.
Fig. 5 is a schematic structural diagram of a lane line endpoint recognition device corresponding to Fig. 1 provided by some embodiments of the present application. The device includes:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can:
according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determine the position of the lane line endpoint according to the result of the target detection.
Some embodiments of the present application further provide a non-volatile computer storage medium for lane line endpoint recognition corresponding to Fig. 1, storing computer-executable instructions configured to:
according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determine the position of the lane line endpoint according to the result of the target detection.
The various embodiments in the present application are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus, device, and medium embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant details, refer to the description of the method embodiments.
The apparatus, device, and medium provided by the embodiments of the present application correspond one-to-one to the method; therefore, the apparatus, device, and medium also have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, they are not repeated here for the apparatus, device, and medium.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements that are inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes that element.
The above description is only an example of the present application and is not intended to limit the application. For those skilled in the art, various modifications and changes are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the application shall be included within the scope of the claims of the application.

Claims (10)

1. A lane line endpoint recognition method, characterized by comprising:
according to the lane lines contained in a lane line sample image, defining a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, performing target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determining the position of the lane line endpoint according to the result of the target detection.
2. The method according to claim 1, characterized in that determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background;
determining the position of the lane line endpoint according to the result of the target detection and the result of the image segmentation.
3. The method according to claim 1, characterized in that defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image specifically comprises:
defining a bounding box for enclosing a lane line contained in the lane line sample image;
defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image.
4. The method according to claim 3, characterized in that defining the bounding box of the lane line endpoint according to the width and/or height of the bounding box of the lane line further comprises:
limiting the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
5. The method according to claim 4, characterized in that the bounding box of the lane line endpoint is square, and its side length is not greater than the minimum of: the size threshold, the width of the bounding box of the lane line, and the height of the bounding box of the lane line.
6. A lane line endpoint recognition apparatus, characterized by comprising:
a definition module, configured to define, according to the lane lines contained in a lane line sample image, a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
a recognition module, configured to perform target detection in an image to be recognized according to the definition of the bounding box of the lane line endpoint and by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
a determination module, configured to determine the position of the lane line endpoint according to the result of the target detection.
7. The apparatus according to claim 6, characterized in that the determination module determining the position of the lane line endpoint according to the result of the target detection specifically comprises:
the determination module performing image segmentation within the identified bounding box by using an image segmentation algorithm, so as to separate foreground from background;
determining the position of the lane line endpoint according to the result of the target detection and the result of the image segmentation.
8. The apparatus according to claim 6, characterized in that the definition module defining, according to the lane lines contained in the lane line sample image, the bounding box for enclosing a lane line endpoint contained in the lane line sample image specifically comprises:
the definition module defining a bounding box for enclosing a lane line contained in the lane line sample image;
defining, according to the width and/or height of the bounding box of the lane line, the bounding box for enclosing a lane line endpoint contained in the lane line sample image.
9. The apparatus according to claim 8, characterized in that the definition module defining the bounding box of the lane line endpoint according to the width and/or height of the bounding box of the lane line further comprises:
the definition module limiting the maximum size of the bounding box of the lane line endpoint according to a preset size threshold.
10. A lane line endpoint recognition device, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can:
according to the lane lines contained in a lane line sample image, define a bounding box for enclosing a lane line endpoint contained in the lane line sample image;
according to the definition of the bounding box of the lane line endpoint, perform target detection in an image to be recognized by using a target detection algorithm based on a convolutional neural network, so as to identify a bounding box of a lane line endpoint;
determine the position of the lane line endpoint according to the result of the target detection.
CN201811478746.8A 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium Active CN109583393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811478746.8A CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811478746.8A CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Publications (2)

Publication Number Publication Date
CN109583393A true CN109583393A (en) 2019-04-05
CN109583393B CN109583393B (en) 2023-08-11

Family

ID=65926316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811478746.8A Active CN109583393B (en) 2018-12-05 2018-12-05 Lane line end point identification method and device, equipment and medium

Country Status (1)

Country Link
CN (1) CN109583393B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036246A (en) * 2014-06-10 2014-09-10 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN104036253A (en) * 2014-06-20 2014-09-10 智慧城市系统服务(中国)有限公司 Lane line tracking method and lane line tracking system
CN106663207A (en) * 2014-10-29 2017-05-10 微软技术许可有限责任公司 Whiteboard and document image detection method and system
US20160229399A1 (en) * 2015-02-10 2016-08-11 Honda Motor Co., Ltd. Vehicle travel support system and vehicle travel support method
CN105740782A (en) * 2016-01-25 2016-07-06 北京航空航天大学 Monocular vision based driver lane-changing process quantization method
CN106682646A (en) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 Method and apparatus for recognizing lane line
US20180293447A1 (en) * 2017-04-05 2018-10-11 Denso Corporation Road parameter calculator
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108545019A (en) * 2018-04-08 2018-09-18 多伦科技股份有限公司 A kind of safety driving assist system and method based on image recognition technology

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
BRODY HUVAL et al.: "An Empirical Evaluation of Deep Learning on Highway Driving", arXiv.org *
BRODY HUVAL et al.: "An Empirical Evaluation of Deep Learning on Highway Driving", arXiv.org, 17 April 2015 (2015-04-17), pages 1 - 7, XP055392563 *
CEM ÜNSALAN et al.: "Road Network Detection Using Probabilistic and Graph Theoretical Methods", IEEE Transactions on Geoscience and Remote Sensing *
CEM ÜNSALAN et al.: "Road Network Detection Using Probabilistic and Graph Theoretical Methods", IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, 17 April 2012 (2012-04-17), pages 4441 - 4453, XP011472361, DOI: 10.1109/TGRS.2012.2190078 *
SONG HONGJUN et al.: "Road visibility estimation based on lane line detection and image inflection points", Journal of Computer Applications *
SONG HONGJUN et al.: "Road visibility estimation based on lane line detection and image inflection points", Journal of Computer Applications, no. 12, 26 January 2013 (2013-01-26), pages 3397 - 3403 *
ZHANG XIAOLIN: "Face recognition method for ID photos based on deep learning", Computer Systems & Applications *
ZHANG XIAOLIN: "Face recognition method for ID photos based on deep learning", Computer Systems & Applications, vol. 27, no. 5, 15 May 2018 (2018-05-15), pages 203 - 208 *
GUO JIANYING et al.: "A machine-learning-based ADAS lane type discrimination method", Auto Electric Parts *
GUO JIANYING et al.: "A machine-learning-based ADAS lane type discrimination method", Auto Electric Parts, 20 December 2017 (2017-12-20), pages 22 - 24 *
JIN YUE et al.: "Real-time low-visibility driving assistance system based on dark channel prior", Journal of East China University of Science and Technology (Natural Science Edition), HTTPS://DOI.ORG/10.14135/J.CNKI.1006-3080.20180118002 *
JIN YUE et al.: "Real-time low-visibility driving assistance system based on dark channel prior", Journal of East China University of Science and Technology (Natural Science Edition), HTTPS://DOI.ORG/10.14135/J.CNKI.1006-3080.20180118002, 19 June 2018 (2018-06-19), pages 1 - 8 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688971A (en) * 2019-09-30 2020-01-14 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
WO2021063228A1 (en) * 2019-09-30 2021-04-08 上海商汤临港智能科技有限公司 Dashed lane line detection method and device, and electronic apparatus
CN110688971B (en) * 2019-09-30 2022-06-24 上海商汤临港智能科技有限公司 Method, device and equipment for detecting dotted lane line
CN112053407A (en) * 2020-08-03 2020-12-08 杭州电子科技大学 Automatic lane line detection method based on AI technology in traffic law enforcement image
CN112053407B (en) * 2020-08-03 2024-04-09 杭州电子科技大学 Automatic lane line detection method based on AI technology in traffic law enforcement image
WO2022028383A1 (en) * 2020-08-06 2022-02-10 长沙智能驾驶研究院有限公司 Lane line labeling method, detection model determining method, lane line detection method, and related device
EP4047520A1 (en) * 2021-02-23 2022-08-24 Beijing Tusen Zhitu Technology Co., Ltd. Method and apparatus for detecting corner points of lane lines, electronic device and storage medium
CN113449648A (en) * 2021-06-30 2021-09-28 北京纵目安驰智能科技有限公司 Method, system, equipment and computer readable storage medium for detecting indicator line

Also Published As

Publication number Publication date
CN109583393B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN109583393A (en) A kind of lane line endpoints recognition methods and device, equipment, medium
CN110287276A (en) High-precision map updating method, device and storage medium
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
Mattyus et al. Enhancing road maps by parsing aerial images around the world
JP6259928B2 (en) Lane data processing method, apparatus, storage medium and equipment
KR102106359B1 (en) Method and apparatus for identifying intersections in an electronic map
Huang et al. Spatial-temproal based lane detection using deep learning
CN108875595A (en) A kind of Driving Scene object detection method merged based on deep learning and multilayer feature
US20130243343A1 (en) Method and device for people group detection
CN110119148A (en) A kind of six-degree-of-freedom posture estimation method, device and computer readable storage medium
CN109584294A (en) A kind of road surface data reduction method and apparatus based on laser point cloud
CN103206957B (en) The lane detection and tracking method of vehicular autonomous navigation
CN106327576B (en) A kind of City scenarios method for reconstructing and system
CN109858349A (en) A kind of traffic sign recognition method and its device based on improvement YOLO model
CN114581744A (en) Image target detection method, system, equipment and storage medium
CN115203352A (en) Lane level positioning method and device, computer equipment and storage medium
CN110299063B (en) Visual display method and device for trajectory data
CN109190687A (en) A kind of nerve network system and its method for identifying vehicle attribute
CN110188607A (en) A kind of the traffic video object detection method and device of multithreads computing
CN109711341A (en) A kind of virtual lane line recognition methods and device, equipment, medium
CN109800684A (en) The determination method and device of object in a kind of video
CN113269806B (en) Method, device and processor for measuring blood flow inside blood vessel
CN116129386A (en) Method, system and computer readable medium for detecting a travelable region
CN117011692A (en) Road identification method and related device
Kiran et al. Automatic hump detection and 3D view generation from a single road image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 108-27, Building 1, No. 611 Yunxiu South Road, Wuyang Street, Deqing County, Huzhou City, Zhejiang Province, 313200 (Moganshan National High tech Zone)

Patentee after: Kuandong (Huzhou) Technology Co.,Ltd.

Address before: 811, 8 / F, 101, 3-8 / F, building 17, rongchuang Road, Chaoyang District, Beijing 100012

Patentee before: KUANDENG (BEIJING) TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190405

Assignee: Zhejiang Kuandong Yuntu Technology Co.,Ltd.

Assignor: Kuandong (Huzhou) Technology Co.,Ltd.

Contract record no.: X2024980001061

Denomination of invention: A lane line endpoint recognition method, device, equipment, and medium

Granted publication date: 20230811

License type: Common License

Record date: 20240119

EE01 Entry into force of recordation of patent licensing contract