CN113392793A - Method, device, equipment, storage medium and unmanned vehicle for identifying lane line - Google Patents


Info

Publication number
CN113392793A
Authority
CN
China
Prior art keywords
lane
image
lane line
recognized
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110718085.7A
Other languages
Chinese (zh)
Inventor
何悦
张伟
谭啸
孙昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110718085.7A priority Critical patent/CN113392793A/en
Publication of CN113392793A publication Critical patent/CN113392793A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the disclosure disclose a method, a device, equipment, a storage medium and an unmanned vehicle for identifying lane lines, relating to the field of artificial intelligence, in particular to computer vision and deep learning technologies, and applicable to scenarios such as intelligent transportation and smart cities. One embodiment of the method comprises: acquiring an image to be recognized; and inputting the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to those features to recognize lane lines. The embodiment realizes lane line recognition based on self-supervised learning and instance segmentation.

Description

Method, device, equipment, storage medium and unmanned vehicle for identifying lane line
Technical Field
The disclosure relates to the field of artificial intelligence, in particular to computer vision and deep learning technologies, and can be applied to scenarios such as intelligent transportation and smart cities.
Background
With the rapid development of computer technology and internet technology in recent years, unmanned driving technology has begun to be applied to realize automatic driving. The implementation of unmanned driving involves a variety of traffic information detection and identification tasks, including but not limited to obstacle detection, traffic signal detection and identification, drivable space detection and identification, and so on. Among them, the detection and identification of lane lines is one of the main tasks in the field of unmanned driving technology.
Existing lane line detection and identification methods mainly include deep-learning-based methods and methods based on projection or dashed-line fitting. Among them, deep-learning-based lane line detection and identification generally depends on anchors (Anchor) preset for lane lines and on various anchor-related network parameters.
Disclosure of Invention
Embodiments of the present disclosure propose methods, apparatuses, devices, storage media, program products, and unmanned vehicles for identifying lane lines.
In a first aspect, an embodiment of the present disclosure provides a method for identifying a lane line, the method including: acquiring an image to be recognized; and inputting the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to the features of the image to be recognized to recognize lane lines.
In a second aspect, an embodiment of the present disclosure provides an apparatus for identifying a lane line, the apparatus including: an image acquisition module configured to acquire an image to be recognized; and a lane line recognition module configured to input the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to the features of the image to be recognized to recognize lane lines.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
In a fourth aspect, the disclosed embodiments propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.
In a fifth aspect, the present disclosure provides a computer program product including a computer program, which when executed by a processor implements the method as described in any implementation manner of the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides an unmanned vehicle, which includes a camera acquisition module and the electronic device described in the third aspect, where the camera acquisition module is configured to capture a driving environment image as the image to be recognized during driving of the unmanned vehicle, and to send the image to be recognized to the electronic device.
According to the method, device, equipment, storage medium and unmanned vehicle for recognizing lane lines provided by embodiments of the present disclosure, the feature extraction model is pre-trained through unsupervised learning, and the segmentation model that recognizes lane lines from the extracted features is obtained through supervised training; the feature extraction model and the segmentation model together form the lane line recognition model, which detects and recognizes lane lines in the image to be recognized. This reduces the difficulty of labeling samples for the feature extraction model, and recognizing lane lines with a segmentation model avoids the design of anchors and related parameters, which helps reduce the complexity of the lane line recognition model.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects, and advantages of the disclosure will become apparent from a reading of the following detailed description of non-limiting embodiments which proceeds with reference to the accompanying drawings. The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method of the present disclosure for identifying lane lines;
FIG. 3 is a flow chart of yet another embodiment of a method for identifying lane lines of an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of one application scenario of a method for identifying lane lines of an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for identifying lane lines of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for identifying lane lines according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 to which embodiments of the method for identifying lane lines or the apparatus for identifying lane lines of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include an unmanned vehicle 101, a road 102, and a server 103. The unmanned vehicle 101, also known as a driverless or autonomous vehicle, may be any of various types of intelligent vehicles. Generally, the unmanned vehicle 101 can sense the surrounding environment through its on-board sensing system and automatically plan a driving route to control its own driving.
The unmanned vehicle 101 may include an image capturing device such as a camera to capture an image of the road 102 during travel. The unmanned vehicle 101 may communicate with the server 103 to receive or transmit messages or the like by various means such as wire, wireless communication link, or fiber optic cable.
It should be noted that the method for identifying lane lines provided by the embodiments of the present disclosure is generally performed by the unmanned vehicle 101, and accordingly, the apparatus for identifying lane lines is generally provided in the unmanned vehicle 101. In this case, the unmanned vehicle 101 may analyze the acquired road image using a pre-trained lane line recognition model to recognize the lane lines therein, thereby assisting its driving according to the recognition result. In such a deployment, the exemplary system architecture 100 may not include a server 103.
It should be further noted that, after the unmanned vehicle 101 acquires the road image, the road image may be sent to the server 103, so that the server 103 analyzes the acquired road image by using a pre-trained lane line recognition model to recognize the lane line therein, and feeds back the lane line recognition result to the unmanned vehicle 101 to assist the unmanned vehicle in driving.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of unmanned vehicles, roads, and servers in fig. 1 is merely illustrative. There may be any number of unmanned vehicles, roads, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of the present disclosure for identifying a lane line is shown. The method for identifying the lane line includes the steps of:
step 201, acquiring an image to be identified.
In the present embodiment, an executing subject of the method for identifying lane lines (e.g., the unmanned vehicle shown in fig. 1) may acquire the image to be recognized from local storage or from another storage device (e.g., a connected database or server).
Alternatively, the executing subject may capture an image as the image to be recognized using a preset image acquisition device (such as a camera or a video recorder).
The image to be recognized may be any of various types of images. Generally, in order to recognize the lane lines therein, the image to be recognized is a road image.
Step 202, inputting the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result.
In this embodiment, the executing subject may process the image to be recognized using the pre-trained lane line recognition model to obtain the lane line recognition result. Generally, the lane line recognition result may indicate the positions of the lane lines included in the image to be recognized. The specific representation of the lane line recognition result can be flexibly set according to the actual application scenario or application requirements.
The lane line recognition model includes a feature extraction model and a segmentation model. The feature extraction model is used for extracting features of the image to be recognized. The segmentation model is used for performing instance segmentation on the image to be recognized according to the features extracted by the feature extraction model, so as to recognize the lane lines in the image. Specifically, after the segmentation model performs instance segmentation on the image to be recognized, the lane lines can be determined based on various methods such as point sampling, interpolation, or least-squares fitting.
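By way of a non-authoritative sketch (not part of the original disclosure), the post-segmentation fitting step could use a least-squares polynomial per lane instance; the point set, helper name and polynomial degree below are assumptions:

```python
import numpy as np

def fit_lane(points, degree=2):
    """points: (N, 2) array of (x, y) pixel coordinates of one lane instance."""
    xs, ys = points[:, 0], points[:, 1]
    coeffs = np.polyfit(ys, xs, degree)   # least-squares fit of x = poly(y)
    return np.poly1d(coeffs)              # callable curve x = f(y)

# Usage: sample the fitted curve at fixed image rows.
lane = fit_lane(np.array([[100.0, 400.0], [120.0, 300.0], [150.0, 200.0]]))
xs_at_rows = lane(np.arange(200, 401, 50))
```

Fitting x as a function of y is a common choice because lane lines run roughly vertically in road images.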
The feature extraction model may be constructed based on various existing feature extraction networks. For example, the feature extraction model may be any of various convolutional neural networks. The feature extraction model can be obtained in advance through unsupervised training, using any of various existing unsupervised training methods.
The segmentation model can be constructed based on various existing instance-segmentation networks (such as ERFNet). The segmentation model can be obtained in advance through supervised training. Specifically, labeled training data may be obtained first, and the segmentation model is then trained based on a machine learning method.
Alternatively, the feature extraction model may employ a residual network (ResNet) as its backbone. In particular, the feature extraction model can flexibly adopt any of various existing residual networks (such as ResNet50) according to the application scenario. Through the skip connections of its residual blocks, a residual network can alleviate problems such as vanishing gradients caused by increased model depth, improving model accuracy.
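For illustration only, a ResNet-50 backbone with its classification head removed could be set up as below; this is a sketch assuming a recent torchvision, and the input size is arbitrary:

```python
import torch
import torchvision

# Drop the average-pooling and fully connected layers so the network
# outputs spatial feature maps instead of classification logits.
backbone = torchvision.models.resnet50(weights=None)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

with torch.no_grad():
    feats = feature_extractor(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 2048, 7, 7])
```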
Alternatively, the feature extraction model may be trained using a contrastive unsupervised learning method, for example the SimCLR unsupervised training method.
Contrastive unsupervised learning generally trains a model on large amounts of public image data based on the principle of pulling the features of two views of the same image closer together and pushing the features of views generated from different images farther apart, so as to obtain better feature representations.
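A minimal sketch of such an objective, in the spirit of a simplified InfoNCE / NT-Xent loss (the batch size, embedding width and temperature are assumed values, not from the disclosure):

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) pairwise similarities
    targets = torch.arange(z1.size(0))       # matching views lie on the diagonal
    return F.cross_entropy(logits, targets)  # pull positives, push negatives

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```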
Alternatively, the feature extraction network may be trained using the Momentum Contrast (MoCo) unsupervised training method. For example, the feature extraction model may be constructed and trained based on methods such as MoCo V1 or MoCo V2.
As an example, an initial feature extraction model can be built using the MoCo V2 framework with ResNet50 as the backbone, and unsupervised training can be performed on publicly available lane line datasets to obtain the trained feature extraction model.
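The core MoCo mechanics can be sketched as follows: a key encoder trails the query encoder as an exponential moving average, and a queue of past keys supplies negatives. The momentum coefficient and temperature echo common MoCo V2 settings but are assumptions here; this is a simplification, not the patent's actual training code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """Key encoder follows the query encoder as an exponential moving average."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def moco_loss(q, k, queue, temperature=0.2):
    """q, k: (B, D) query/key features; queue: (K, D) stored negative keys."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)            # (B, 1) positive logits
    l_neg = q @ queue.t()                               # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    targets = torch.zeros(q.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, targets)

loss = moco_loss(torch.randn(8, 128), torch.randn(8, 128),
                 torch.randn(4096, 128))
```

The queue lets the number of negatives greatly exceed the batch size, which is the main design difference from SimCLR-style in-batch negatives.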
In some optional implementations of this embodiment, the segmentation model may be trained using the feature extraction results output by the trained feature extraction model as training data. Specifically, the feature extraction model is first obtained through unsupervised training, and training data for the segmentation model is obtained; the training data may include images to be recognized and pre-labeled lane line recognition results for those images. The trained feature extraction model then processes each image to obtain a feature extraction result, which is input into the initial segmentation model. The difference between the lane line recognition result output by the initial segmentation model and the pre-labeled lane line recognition result is computed, and the parameters of the initial segmentation network are adjusted based on back propagation and gradient descent until training of the initial segmentation model is complete, yielding the trained segmentation model.
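An end-to-end toy version of this supervised stage is sketched below; the tiny convolutional stand-ins, the class count and the synthetic batch are assumptions chosen so the sketch runs, not the patent's actual MoCo-pretrained network or ERFNet model:

```python
import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs end to end.
feature_extractor = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1)).eval()
seg_model = nn.Conv2d(64, 5, 1)               # 5 = background + 4 lane instances
optimizer = torch.optim.Adam(seg_model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 64, 64)            # stand-in for a labeled batch
lane_masks = torch.randint(0, 5, (2, 64, 64)) # pre-labeled instance masks

with torch.no_grad():
    feats = feature_extractor(images)         # transferred pretrained features
logits = seg_model(feats)                     # per-pixel lane-instance logits
loss = criterion(logits, lane_masks)          # difference from the labels
optimizer.zero_grad()
loss.backward()                               # back propagation
optimizer.step()                              # gradient-descent update
```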
The method provided by this embodiment of the disclosure realizes lane line recognition based on self-supervised learning and instance segmentation. Specifically, the feature extraction model is trained through self-supervised learning; the trained feature extraction model serves as a pre-training model whose output feature extraction results are transferred to the supervised training of the segmentation model; and the trained feature extraction model and segmentation model together perform lane line recognition on the image to be recognized. Compared with existing lane line identification methods based on target detection, the method provided by the disclosure does not need to preset a large number of anchors, nor to set a large number of anchor-related network parameters in the model. This reduces the complexity and computation of the lane line recognition model, avoids the strong dependence of the recognition result on anchor design, and improves the adaptability and flexibility of the lane line recognition model.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method of the present disclosure for identifying lane lines is shown. The method for identifying the lane line includes the steps of:
Step 301, acquiring an image to be recognized.
Step 302, inputting the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result.
In this embodiment, in some cases, two or more lane lines may be included in the image to be recognized. At this time, the obtained lane line recognition result may indicate that the image to be recognized includes at least two lane lines.
Step 303, selecting two adjacent lane lines from the at least two identified lane lines as a group, and generating the connected region formed by each group of lane lines as a lane area.
In this embodiment, two adjacent lane lines may be selected from the at least two identified lane lines to form a group, so as to obtain a plurality of groups of lane lines. Specifically, the lane lines may be sequentially arranged according to the positions of the identified lane lines in the image to be identified, and then any two adjacent lane lines are regarded as a group.
As an example, the identified lane lines are arranged in the order of the first lane line, the second lane line, and the third lane line according to the position. The first lane line and the second lane line may be taken as one group and the second lane line and the third lane line may be taken as another group, thereby obtaining two groups of lane lines.
Then, for each group of lane lines, the connected region formed by the two lane lines in the group may be generated as the lane area corresponding to that group. Specifically, various methods may be employed to generate the connected region formed by each group of lane lines. For example, the end points of the two lane lines may be directly connected to form a connected region, as in the sketch below.
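A hypothetical illustration of this end-point-connection variant (OpenCV is assumed for rasterization; the coordinates and image size are made up):

```python
import numpy as np
import cv2

left = np.array([[100, 400], [140, 200]])   # (x, y) end points of one lane line
right = np.array([[300, 400], [260, 200]])  # end points of the adjacent line

# Walk down one line and back up the other so the polygon closes correctly.
polygon = np.vstack([left, right[::-1]]).astype(np.int32)
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.fillPoly(mask, [polygon], 255)           # lane region as a binary mask
```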
Alternatively, the two lane lines may each be extended so that one end of each intersects the other at a single point while the other ends intersect the boundary of the image to be recognized; the connected region bounded by the two (extended) lane lines and the corresponding intersection points is then determined as the lane area.
Due to the principle of perspective, adjacent lane lines in an actually captured road image appear to intersect at a distant point. Based on this, the lane area can be quickly determined by extending the adjacent lane lines.
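For illustration only, the extension step can be sketched with homogeneous coordinates; the sample end points and the 480-row image height are assumptions:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous-coordinate line through two (x, y) points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])          # undefined only for parallel lines

left = line_through((100, 400), (140, 200))
right = line_through((300, 400), (260, 200))
bottom = np.array([0.0, 1.0, -479.0])        # image boundary row y = 479

vanish = intersect(left, right)              # far point where the pair meets
base_l = intersect(left, bottom)             # left line at the image boundary
base_r = intersect(right, bottom)            # right line at the image boundary
# The lane area is the connected region bounded by these three points.
```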
After determining the lane area, the executing subject may further output the position of the lane area, and may also adjust its own travel path, travel strategy, or the like according to the determined lane area to assist its own safe driving.
In some optional implementation manners of this embodiment, after the lane area is identified based on the lane line identification result, a target detection result for a preset target corresponding to the lane area may be further obtained, and then the road condition information of the lane area is determined according to the obtained target detection result.
The preset target may be set in advance by a technician. In general, the preset target may be any of various targets on the road, including but not limited to: obstacles, pedestrians, other vehicles, traffic lights, and the like. The target detection result may represent relevant information about the preset target, including but not limited to: the number or density of preset targets, the positions of preset targets in the lane area, and so on.
Specifically, any of various existing target detection methods may be used to perform target detection on the lane area determined in the image to be recognized, so as to obtain the target detection result. The subject performing the target detection may be the subject performing the above method for identifying lane lines, or another electronic device. Correspondingly, the executing subject may obtain the target detection result corresponding to the lane area from local storage or from the other electronic device.
After the target detection result is obtained, the road condition information of the lane area can be further obtained by analyzing the target detection result. The road condition information may indicate various traffic conditions in the lane area and may be specifically set according to the actual application scenario. For example, the road condition information may indicate whether the lane corresponding to the lane area is congested.
The road condition information of the lane area can be obtained through analysis by various methods. For example, the road condition information can be comprehensively judged from the target detection results of the lane areas in the images captured before and after the acquisition time of the current image to be recognized. As an example, if the pedestrian and vehicle densities indicated by the target detection results of all images collected within a certain time period both exceed preset thresholds, the road may be considered congested during that period.
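A toy sketch of such a sliding-window judgment; the window length and density thresholds are invented values, not from the disclosure:

```python
from collections import deque

WINDOW, PED_THRESH, VEH_THRESH = 30, 0.02, 0.05  # assumed frame count/thresholds
recent = deque(maxlen=WINDOW)                    # (ped_density, veh_density) pairs

def update_and_judge(ped_density, veh_density):
    """Flag congestion only if every frame in the window exceeds both thresholds."""
    recent.append((ped_density, veh_density))
    return (len(recent) == WINDOW and
            all(p > PED_THRESH and v > VEH_THRESH for p, v in recent))
```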
After obtaining the road condition information, the executing subject may adjust its driving route or driving strategy according to the specific road condition information, so as to assist its own safe driving.
With continued reference to fig. 4, fig. 4 shows an exemplary application scenario 400 of the method for identifying lane lines according to the present embodiment. In the application scenario of fig. 4, the unmanned vehicle 401 may continuously capture lane images 402 while driving on the road. Each lane image 402 may be input into a feature extraction model 403 constructed based on MoCo V2 to extract its features; the extracted features are then input into a segmentation model 404 constructed based on ERFNet to obtain a lane line recognition result 405, which is further analyzed to obtain a lane area recognition result 407.
Further, pedestrian and vehicle detection may also be performed on the lane images 402 to determine the distribution of pedestrians and vehicles over the lane area indicated by the lane area recognition result 407, resulting in a pedestrian and vehicle detection result 406. Then, the congestion state of the lane area indicated by the lane area recognition result 407 may be analyzed by combining the pedestrian and vehicle detection results 406 corresponding to the lane images 402 acquired by the unmanned vehicle over a period of time, so as to obtain a congestion determination result 408. Finally, the unmanned vehicle 401 can adjust its traveling speed according to the congestion determination result 408.
According to the method provided by this embodiment of the disclosure, after the lane line recognition result is obtained using self-supervised learning and instance segmentation, the lane area is further identified from the lane line recognition result, and the road condition information of the lane area is judged in combination with the target detection result for that area, so that the unmanned vehicle can adjust its driving strategy in time according to the lane area recognition result and the road condition information, ensuring driving safety and reliability.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for identifying lane lines, which corresponds to the method embodiment shown in fig. 2, and which may be specifically applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for identifying lane lines provided in this embodiment includes an image acquisition module 501 and a lane line recognition module 502. The image acquisition module 501 is configured to acquire an image to be recognized; the lane line recognition module 502 is configured to input the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to those features to recognize lane lines.
In the apparatus 500 for identifying lane lines of the present embodiment, the specific processing of the image acquisition module 501 and the lane line recognition module 502 and the technical effects thereof may refer to the descriptions of step 201 and step 202 in the embodiment corresponding to fig. 2, and are not repeated here.
In some optional implementations of this embodiment, the lane line recognition result indicates that the image to be recognized includes at least two lane lines; and the apparatus 500 for identifying lane lines further includes: a lane area identification module (not shown) configured to select two adjacent lane lines from the at least two lane lines as a group, and to generate a connected region formed by each group of lane lines as a lane area.
In some optional implementations of the present embodiment, the apparatus 500 for identifying lane lines further includes: a detection result acquisition module (not shown in the figure) configured to acquire a target detection result for a preset target corresponding to the lane area; and a determining module (not shown in the figure) configured to determine the road condition information of the lane area according to the target detection result.
In some optional implementations of the present embodiment, the lane area identification module is further configured to: for each group of lane lines, respectively extend the two lane lines in the group so that they intersect at one point and respectively intersect the boundary of the image to be recognized; and determine the connected region formed by each group of lane lines and the corresponding intersection points as a lane area.
In some optional implementations of this embodiment, the feature extraction model uses a residual network as its backbone network and is trained with a momentum contrast unsupervised training method.
In some optional implementations of this embodiment, the segmentation model uses a feature extraction result output by a trained feature extraction model as training data.
The present disclosure also provides an electronic device, a readable storage medium, a computer program product, and an unmanned vehicle according to embodiments of the present disclosure.
The unmanned vehicle may include a camera acquisition module and the electronic device provided by the present disclosure. The camera acquisition module may be used to capture driving environment images as images to be recognized during driving of the unmanned vehicle. The camera acquisition module may be any of various image acquisition devices, such as a camera. The driving environment image may be an image of the road around the vehicle. The camera acquisition module may send the image to be recognized to the electronic device, so that the electronic device recognizes lane lines by performing the above method for identifying lane lines.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. The electronic device is intended to represent various in-vehicle terminals. It may also represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers, as well as various forms of mobile devices, such as cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as the method for identifying lane lines. For example, in some embodiments, the method for identifying lane lines may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for identifying lane lines described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for identifying lane lines.
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be performed in parallel or sequentially or in a different order, as long as the desired results of the technical solutions provided by this disclosure can be achieved, and are not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

1. A method for identifying a lane line, comprising:
acquiring an image to be recognized;
inputting the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to the features of the image to be recognized to recognize lane lines.
2. The method according to claim 1, wherein the lane line recognition result indicates that the image to be recognized includes at least two lane lines; and
the method further comprises the following steps:
and selecting two adjacent lane lines from the at least two lane lines as a group, and generating a connected region formed by each group of lane lines as a lane area.
3. The method of claim 2, further comprising:
acquiring a target detection result corresponding to the lane area and aiming at a preset target;
and determining the road condition information of the lane area according to the target detection result.
4. The method of claim 2, wherein the generating a connected region of each set of lane lines as a lane region comprises:
for each group of lane lines, respectively extending the two lane lines in the group so that the two lane lines intersect at one point and respectively intersect the boundary of the image to be recognized;
and determining a connected region formed by each group of lane lines and the corresponding intersection points as a lane area.
5. The method according to one of claims 1 to 4, wherein the feature extraction model uses a residual network as its backbone network and is trained with a momentum contrast unsupervised training method.
6. The method of claim 5, wherein the segmentation model adopts a feature extraction result output by the trained feature extraction model as training data.
7. An apparatus for identifying a lane line, comprising:
an image acquisition module configured to acquire an image to be recognized;
the lane line recognition module is configured to input the image to be recognized into a pre-trained lane line recognition model to obtain a lane line recognition result, wherein the lane line recognition model comprises a feature extraction model obtained through unsupervised training and a segmentation model obtained through supervised training, the feature extraction model is used for extracting features of the image to be recognized, and the segmentation model is used for performing instance segmentation according to the features of the image to be recognized to recognize lane lines.
8. The apparatus according to claim 7, wherein the lane line recognition result indicates that the image to be recognized includes at least two lane lines; and
the device further comprises:
and the lane area identification module is configured to select two adjacent lane lines from the at least two lane lines as a group, and to generate a connected region formed by each group of lane lines as a lane area.
9. The apparatus of claim 8, further comprising:
the detection result acquisition module is configured to acquire a target detection result corresponding to the lane area and aiming at a preset target;
a determining module configured to determine road condition information of the lane area according to the target detection result.
10. The apparatus of claim 8, wherein the lane area identification module is further configured to:
for each group of lane lines, respectively extending the two lane lines in the group so that the two lane lines intersect at one point and respectively intersect the boundary of the image to be recognized;
and determining a connected region formed by each group of lane lines and the corresponding intersection points as a lane area.
11. The apparatus according to one of claims 7 to 10, wherein the feature extraction model uses a residual network as its backbone network and is trained with a momentum contrast unsupervised training method.
12. The apparatus of claim 11, wherein the segmentation model adopts a feature extraction result output by a trained feature extraction model as training data.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
16. An unmanned vehicle, comprising a camera acquisition module and the electronic device of claim 13, wherein the camera acquisition module is used for capturing a driving environment image as an image to be recognized during driving of the unmanned vehicle, and for sending the image to be recognized to the electronic device.
CN202110718085.7A 2021-06-28 2021-06-28 Method, device, equipment, storage medium and unmanned vehicle for identifying lane line Pending CN113392793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110718085.7A CN113392793A (en) 2021-06-28 2021-06-28 Method, device, equipment, storage medium and unmanned vehicle for identifying lane line


Publications (1)

Publication Number Publication Date
CN113392793A 2021-09-14

Family

ID=77624269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110718085.7A Pending CN113392793A (en) 2021-06-28 2021-06-28 Method, device, equipment, storage medium and unmanned vehicle for identifying lane line

Country Status (1)

Country Link
CN (1) CN113392793A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807236A (en) * 2021-09-15 2021-12-17 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and program product for lane line detection
CN114022865A (en) * 2021-10-29 2022-02-08 北京百度网讯科技有限公司 Image processing method, apparatus, device and medium based on lane line recognition model
CN114463707A (en) * 2022-02-10 2022-05-10 中国电信股份有限公司 Vehicle weight recognition method and device, storage medium and electronic equipment
CN115471803A (en) * 2022-08-31 2022-12-13 北京四维远见信息技术有限公司 Method, device and equipment for extracting traffic identification line and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665327A (en) * 2016-07-29 2018-02-06 高德软件有限公司 A kind of method for detecting lane lines and device
WO2020052668A1 (en) * 2018-09-15 2020-03-19 北京市商汤科技开发有限公司 Image processing method, electronic device, and storage medium
CN111259704A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Training method of dotted lane line endpoint detection model
CN111460921A (en) * 2020-03-13 2020-07-28 华南理工大学 Lane line detection method based on multitask semantic segmentation
CN111563463A (en) * 2020-05-11 2020-08-21 上海眼控科技股份有限公司 Method and device for identifying road lane lines, electronic equipment and storage medium
WO2020261183A1 (en) * 2019-06-25 2020-12-30 Owkin Inc. Systems and methods for image preprocessing
CN112464879A (en) * 2020-12-10 2021-03-09 山东易视智能科技有限公司 Ocean target detection method and system based on self-supervision characterization learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665327A (en) * 2016-07-29 2018-02-06 高德软件有限公司 A kind of method for detecting lane lines and device
WO2020052668A1 (en) * 2018-09-15 2020-03-19 北京市商汤科技开发有限公司 Image processing method, electronic device, and storage medium
CN111259704A (en) * 2018-12-03 2020-06-09 初速度(苏州)科技有限公司 Training method of dotted lane line endpoint detection model
WO2020261183A1 (en) * 2019-06-25 2020-12-30 Owkin Inc. Systems and methods for image preprocessing
CN111460921A (en) * 2020-03-13 2020-07-28 华南理工大学 Lane line detection method based on multitask semantic segmentation
CN111563463A (en) * 2020-05-11 2020-08-21 上海眼控科技股份有限公司 Method and device for identifying road lane lines, electronic equipment and storage medium
CN112464879A (en) * 2020-12-10 2021-03-09 山东易视智能科技有限公司 Ocean target detection method and system based on self-supervision characterization learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Hong; LI Song; LI Minzan; LIU Haojie; QIAO Lang; ZHANG Yao: "Research progress on imaging perception and deep learning applications in agricultural information", Transactions of the Chinese Society for Agricultural Machinery, no. 05, 25 May 2020 (2020-05-25) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807236A (en) * 2021-09-15 2021-12-17 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and program product for lane line detection
CN113807236B (en) * 2021-09-15 2024-05-17 北京百度网讯科技有限公司 Method, device, equipment, storage medium and program product for lane line detection
CN114022865A (en) * 2021-10-29 2022-02-08 北京百度网讯科技有限公司 Image processing method, apparatus, device and medium based on lane line recognition model
CN114463707A (en) * 2022-02-10 2022-05-10 中国电信股份有限公司 Vehicle weight recognition method and device, storage medium and electronic equipment
CN115471803A (en) * 2022-08-31 2022-12-13 北京四维远见信息技术有限公司 Method, device and equipment for extracting traffic identification line and readable storage medium
CN115471803B (en) * 2022-08-31 2024-01-26 北京四维远见信息技术有限公司 Extraction method, device and equipment of traffic identification line and readable storage medium

Similar Documents

Publication Publication Date Title
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
CN113392793A (en) Method, device, equipment, storage medium and unmanned vehicle for identifying lane line
US20210302585A1 (en) Smart navigation method and system based on topological map
KR20230026961A (en) Method and apparatus for predicting motion track of obstacle and autonomous vehicle
CN114415628A (en) Automatic driving test method and device, electronic equipment and storage medium
CN110717918B (en) Pedestrian detection method and device
CN113378693B (en) Method and device for generating target detection system and detecting target
CN115860102B (en) Pre-training method, device, equipment and medium for automatic driving perception model
CN113011323A (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN114610628A (en) Scene library establishing and testing method, device, equipment, medium and program product
CN113467875A (en) Training method, prediction method, device, electronic equipment and automatic driving vehicle
CN115761702A (en) Vehicle track generation method and device, electronic equipment and computer readable medium
CN116092055A (en) Training method, acquisition method, device, equipment and automatic driving vehicle
CN114715145A (en) Trajectory prediction method, device and equipment and automatic driving vehicle
CN113119999A (en) Method, apparatus, device, medium, and program product for determining automatic driving characteristics
CN113052047A (en) Traffic incident detection method, road side equipment, cloud control platform and system
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN114998863B (en) Target road identification method, device, electronic equipment and storage medium
CN114549961B (en) Target object detection method, device, equipment and storage medium
CN115995075A (en) Vehicle self-adaptive navigation method and device, electronic equipment and storage medium
CN113344121B (en) Method for training a sign classification model and sign classification
CN114495049A (en) Method and device for identifying lane line
CN114708498A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device
CN115431968B (en) Vehicle controller, vehicle and vehicle control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination