CN115797899A - Lane line detection method and device - Google Patents

Lane line detection method and device

Info

Publication number
CN115797899A
CN115797899A (application number CN202111055992.4A)
Authority
CN
China
Prior art keywords
lane line
lane
feature map
feature point
image
Prior art date
Legal status
Pending
Application number
CN202111055992.4A
Other languages
Chinese (zh)
Inventor
鲁恒宇
苏鹏
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111055992.4A priority Critical patent/CN115797899A/en
Priority to PCT/CN2022/116161 priority patent/WO2023036032A1/en
Publication of CN115797899A publication Critical patent/CN115797899A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using neural networks
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application discloses a lane line detection method and device, and relates to the technical field of automatic driving. The method comprises the following steps: acquiring a characteristic map of a first image; determining target feature points in the feature map; and determining a first topological relation according to the target feature point, wherein the target feature point is associated with the position where the first topological relation changes, and the first topological relation is used for indicating the association relation between lane lines in the first image. The method determines the topological relation of the lane lines by identifying the target feature points in the feature map, and is beneficial to improving the detection efficiency of the topological relation of the lane lines.

Description

Lane line detection method and device
Technical Field
The embodiment of the application relates to the technical field of automatic driving, in particular to a lane line detection method and device.
Background
Lane line detection is an important task of an advanced driver assistance system (ADAS), and is a key technology for realizing adaptive cruise control (ACC), a lane departure warning system (LDWS), and the like. Lane line detection is a complex and challenging problem in research on intelligent vehicles and unmanned vehicles. As a main part of a road, the lane line provides a reference for an unmanned vehicle and guides safe driving. Meanwhile, lane line detection can further be used to realize road positioning, determine the relative position between a vehicle and the road, and assist the decision planning of the vehicle.
Currently, there are many methods for detecting lane lines based on conventional image processing, which can achieve good results on clearly visible, easily recognized roads. However, in some designs, as shown in fig. 1, in scenarios where high-precision maps are used, points on each lane line are usually recognized from a single-frame picture, the points on the lane lines are then projected into an absolute coordinate system, and the association relationship of the lane lines is established by post-processing algorithms such as clustering and fitting, so as to obtain a lane topological graph for the high-precision map. This approach suffers from low efficiency, poor general adaptability, strong dependence on other data, and the like. Moreover, as research progresses, the scenarios corresponding to lane line detection tasks are becoming more and more diverse, and how to improve the lane line detection effect in complex scenarios remains a difficult problem.
Disclosure of Invention
The embodiment of the application provides a lane line detection method and device, which are beneficial to improving the lane line detection efficiency.
In a first aspect, an embodiment of the present application provides a lane line detection method, which may be used in a lane line detection device, where the lane line detection device may be deployed on a vehicle side or a server side, may be an independent device, may also be a chip or a component in a device, and may also be a software module.
The method comprises the following steps: acquiring a feature map of a first image; determining target feature points in the feature map; and determining a first topological relation according to the target feature point, wherein the target feature point is associated with the position where the first topological relation changes, and the first topological relation is used for indicating the association relation between lane lines in the first image.
By the method, a complex lane line detection scene can be converted into a simple scene according to the predefined target feature points, so that the association relation between the lane lines in the first image is determined.
With reference to the first aspect, in a possible implementation manner, the determining a target feature point in the feature map includes: calculating the confidence degree of each feature point in the feature map as the target feature point; and determining the target characteristic point in the characteristic diagram according to the confidence degree.
By the above method, for example, a target feature point can be determined among a plurality of feature points of the feature map based on a target detection algorithm and the confidence.
With reference to the first aspect, in a possible implementation manner, the determining a first topological relation according to the target feature point includes: according to the position of the target feature point in the feature map, performing slice division on the feature map to obtain at least two feature map slices; and determining the first topological relation according to the codes of the lane lines in the at least two feature map slices.
By the method, the feature map is divided into at least two feature map segments according to the target feature point, so that lane lines in the at least two feature map segments are detected respectively.
With reference to the first aspect, in a possible implementation manner, the method further includes: and adjusting the codes of the lane line where the target feature point is located and/or the adjacent lane line according to the position associated with the target feature point.
By the above method, code matching is performed, according to the code of each lane line, on the lane lines in an image sequence or in at least two feature map slices belonging to the same image, which reduces the parameters introduced by the algorithm and improves the robustness of the lane line detection algorithm.
With reference to the first aspect, in a possible implementation manner, the target feature point is associated with any one of the following positions: a lane line stop position, a lane line bifurcation position, or a lane line merge position.
By the method, the position points influencing the lane topological relation change can be predefined according to the conversion relation of the lane topological relation. It should be understood that the present disclosure is only illustrative and not restrictive of several possible locations, and in other embodiments, other locations are possible and will not be described herein.
With reference to the first aspect, in a possible implementation manner, the first image belongs to a group of image sequences, and the method further includes: and determining a second topological relation according to the codes of the lane lines in the plurality of images in the image sequence, wherein the second topological relation is used for indicating the association relation between the lane lines in the image sequence.
By the method, the lane line detection device can determine the topological relation among the lane lines in different images according to a group of image sequences, and the detection efficiency of the lane line topological relation is improved.
With reference to the first aspect, in a possible implementation manner, the method further includes: determining a similarity matrix according to the feature map, where the similarity matrix is used to indicate the global association relation of each feature point in the feature map.
By the method, the lane line detection device can learn the global topological relation among the characteristic points in the characteristic diagram of one frame of image so as to enhance the association relation among the characteristic points.
In a second aspect, an embodiment of the present application provides a lane line detection apparatus, including: the acquiring unit is used for acquiring a characteristic map of the first image; a first determining unit, configured to determine a target feature point in the feature map; a second determining unit, configured to determine a first topological relation according to the target feature point, where the target feature point is associated with a position where the first topological relation changes, and the first topological relation is used to indicate an association relation between lane lines in the first image.
With reference to the second aspect, in a possible implementation manner, the first determining unit is configured to: calculating the confidence degree of each feature point in the feature map as the target feature point; and determining the target feature point in the feature map according to the confidence.
With reference to the second aspect, in a possible implementation manner, the second determining unit is configured to: perform slice division on the feature map according to the position of the target feature point in the feature map to obtain at least two feature map slices; and determine the first topological relation according to the codes of the lane lines in the at least two feature map slices.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes: and the adjusting unit is used for adjusting the codes of the lane line where the target feature point is located and/or the adjacent lane line according to the position associated with the target feature point.
With reference to the second aspect, in a possible implementation manner, the target feature point is associated with any one of the following positions: a lane line stop position, a lane line bifurcation position, or a lane line merge position.
With reference to the second aspect, in a possible implementation manner, the first image belongs to a group of image sequences, and the apparatus further includes: a third determining unit, configured to determine a second topological relation according to codes of lane lines in the multiple images in the image sequence, where the second topological relation is used to indicate an association relation between the lane lines in the image sequence.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes: a fourth determining unit, configured to determine a similarity matrix according to the feature map, where the similarity matrix is used to indicate the global association relation of each feature point in the feature map. It should be noted that the first determining unit, the second determining unit, the third determining unit, or the fourth determining unit may be different processors or may be the same processor, which is not limited in this embodiment of the present application.
In a third aspect, an embodiment of the present application provides a lane line detection apparatus, including: a processor and a memory; the memory is used for storing programs; the processor is configured to execute the program stored in the memory to cause the apparatus to implement the method as described above in the first aspect and any one of the possible designs of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, in which program code is stored, and when the program code runs on a computer, the program code causes the computer to execute the method according to the first aspect and the possible design of the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a computer, causes the computer to execute the method according to the first aspect and possible designs of the first aspect.
In a sixth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor, and is configured to call a computer program or computer instructions stored in a memory, so as to cause the processor to execute the method according to the first aspect and possible designs of the first aspect.
With reference to the sixth aspect, in one possible implementation manner, the processor may be coupled with the memory through an interface.
With reference to the sixth aspect, in a possible implementation manner, the chip system may further include a memory, where the computer program or the computer instructions are stored in the memory.
In a seventh aspect, an embodiment of the present application provides a processor, where the processor is configured to call a computer program or computer instructions stored in a memory, so as to cause the processor to perform the method described in the first aspect and possible designs of the first aspect.
The embodiments of the present application may be further combined to provide more implementations on the basis of the implementations provided by the above aspects.
Technical effects that can be achieved by any one of the above-mentioned possible designs of the second aspect to the seventh aspect can be described with reference to the technical effects that can be achieved by any one of the above-mentioned possible designs of the first aspect, and repetition points are not discussed.
Drawings
FIG. 1 is an example of a lane line detection method;
FIG. 2 is a schematic diagram of an application scenario to which embodiments of the present application are applicable;
FIG. 3 is a schematic diagram of a vehicle sensing system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the principle of a lane line detection apparatus according to an embodiment of the present application;
FIGS. 5a-5c are schematic diagrams of positions associated with target feature points according to embodiments of the present application;
FIG. 6 is a schematic diagram of a target detection module according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a feature segmentation module according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a lane line detection module according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of a lane line detection method according to an embodiment of the present application;
FIGS. 10a-10c are schematic diagrams of lane line coding according to an embodiment of the present application;
FIG. 11 is a schematic diagram of global relationship detection according to an embodiment of the present application;
FIGS. 12a-12b are schematic diagrams of a display mode according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a lane line detection method according to an embodiment of the present application;
FIG. 14 is a schematic diagram of a lane line detection method according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method and a device for detecting a lane line, wherein a first topological relation is determined by identifying a target feature point in a feature map of a first image, and the method and the device are beneficial to improving the detection efficiency of the lane line. The method and the device are based on the same technical conception, and because the principles of solving the problems of the method and the device are similar, the implementation of the device and the method can be mutually referred, and repeated parts are not repeated.
It should be noted that the lane line detection scheme in the embodiments of the present application may be applied to the Internet of Vehicles, such as vehicle to everything (V2X), long term evolution-vehicle (LTE-V), vehicle to vehicle (V2V) communication, and the like. For example, it may be applied to a vehicle having a driving function, or to other devices in a vehicle having a driving function. Such other devices include, but are not limited to: a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, a vehicle-mounted component, a vehicle-mounted chip, a vehicle-mounted unit, and sensors such as a vehicle-mounted radar or a vehicle-mounted camera; the vehicle can implement the lane line detection method provided by the embodiments of the present application through these devices. Of course, the lane line detection scheme in the embodiments of the present application may also be used in intelligent terminals other than vehicles that have a motion control function, or be provided in a component of such an intelligent terminal. The intelligent terminal can be intelligent transportation equipment, smart home equipment, a robot, and the like, including but not limited to the intelligent terminal itself or a controller, chip, sensor such as a radar or camera, or other component within the intelligent terminal.
In the embodiments of the present application, "at least one" means one or more, "and" a plurality "means two or more. "and/or" describes the association relationship of the associated object, indicating that there may be three relationships, for example, a and/or B, which may indicate: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a and b, a and c, b and c, or a and b and c, wherein a, b and c can be single or multiple.
And, unless specifically stated otherwise, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the priority or importance of the plurality of objects. For example, the first topological relation and the second topological relation are only used for distinguishing different topological relations, and do not represent the difference of the priority or importance of the two topological relations.
The following describes an application scenario applicable to the embodiments of the present application with reference to the drawings and the embodiments.
Fig. 2 shows a schematic diagram of an application scenario to which the embodiment of the present application is applied. Referring to fig. 2, the application scenario may include a vehicle and a server, where the server may be a cloud, and the cloud may include a cloud server and/or a cloud virtual machine. The server may communicate with the vehicle to provide a variety of services to the vehicle, such as Over The Air (OTA) services, high-precision mapping services, autonomous driving or assisted driving services, and so on.
The vehicle can download high-precision map data from the cloud to obtain a high-precision map, and more accurate navigation service is provided for a user. The service can not only update the road information into the map more timely, but also reduce the requirement of the vehicle local on the storage space. For example, for a large city or an area, the data volume of the whole set of high-precision map is large, the high-precision map service provided by the cloud end enables the vehicle to obtain the high-precision map of the area with the current position and a small range in real time when the vehicle is running, and the high-precision map of the area can be released from the vehicle when the high-precision map is not needed.
The vehicle can interact with the cloud to improve automatic driving or driver assistance functions, thereby improving vehicle safety and travel efficiency. For example, the vehicle can collect road information and surrounding vehicle information through sensing devices installed on the vehicle body and upload the collected information to the cloud; the cloud trains driving algorithms for different scenarios based on the collected information, continuously optimizes the driving algorithms as the training data is updated, and updates the vehicle, so that the vehicle's automatic driving capability for coping with various scenarios is continuously improved. For another example, for a neural-network-based image processing algorithm used by a sensing device, the training of the image processing algorithm may be completed in the cloud and updated as the training data is updated; accordingly, the vehicle can acquire the updated image processing algorithm from the cloud, so that the image processing capability of the sensing device is improved. For another example, in severe weather, the vehicle can acquire weather information and road traffic accident information through the cloud, which assists the vehicle in planning, improves travel efficiency, and reduces the risk of accidents. The cloud can also send real-time road information, such as traffic light information, to the vehicle, so that the vehicle receives in advance the traffic light change interval of the intersection ahead, calculates the passing time according to the current vehicle speed, determines an appropriate and safe time to pass, and plans the vehicle's running speed, thereby reducing the energy consumption of the vehicle and improving driving safety.
In addition, the vehicle can obtain third-party services through the cloud. For example, with the driver's authorization, a courier can open the trunk of the vehicle through a one-time digital authorization and place items in the vehicle, so that the driver can receive a delivery without being present.
The vehicle may exchange information with the cloud by means of wireless communication, and the wireless communication may follow the wireless protocol of the network to which the vehicle is connected, for example, cellular-network-based V2X (C-V2X) communication over a long term evolution (LTE) wireless network or a fifth generation (5G) wireless network, and the like.
The application scene also can include a Road Side Unit (RSU), the RSU can be installed at the road side and can communicate with the cloud and the vehicle, the RSU communicating with the cloud can be regarded as a terminal device similar to the vehicle, and the RSU communicating with the vehicle can be regarded as a terminal device similar to the vehicle and can also be regarded as a server device of the vehicle. The roadside unit may interact with the vehicle or the cloud in a wireless communication manner, and may communicate with the vehicle in a Dedicated Short Range Communication (DSRC) technology, or in a V2X (C-V2X) communication based on a cellular network, for example, based on an LTE communication protocol or based on a 5G communication protocol. The communication with the cloud end may employ cellular network-based V2X (C-V2X) communication, for example, based on an LTE communication protocol or based on a 5G communication protocol. The road side unit may provide services to the vehicle, such as enabling vehicle identification, electronic toll collection, electronic credit, and the like. The road side unit can be provided with a sensing device to realize the collection of road information and further provide the vehicle-road cooperative service. The road side unit can be connected with road side traffic boards (such as electronic traffic lights or electronic speed limit boards) to realize real-time control of the traffic lights or the speed limit boards, or road information can be directly provided for vehicles through a cloud end to improve the automatic driving or auxiliary driving function.
As described above, lane line detection is an important task of an advanced driver assistance system (ADAS), and is a key technology for realizing adaptive cruise control (ACC), a lane departure warning system (LDWS), and the like. Lane line detection is a complex and challenging problem in research on intelligent vehicles and unmanned vehicles. As a main part of a road, the lane line provides a reference for an unmanned vehicle and guides safe driving. Meanwhile, lane line detection can further be used to realize road positioning, determine the relative position between the vehicle and the road, and assist the decision planning of the vehicle.
In the embodiment of the present application, as shown in fig. 3, a vehicle may be equipped with multiple sensors, for example, one or more of a camera device, a laser radar, a millimeter wave radar, an ultrasonic sensor, and the like, so as to acquire environmental information around the vehicle through the sensors, analyze and process the acquired information, and implement functions such as obstacle sensing, target recognition, vehicle positioning, path planning, driver monitoring/reminding, and the like, thereby improving safety, automation degree, and comfort level of vehicle driving. The vehicle carries out comprehensive analysis according to perception information obtained by various sensors, and can also determine which lane of the current road the vehicle is in, topological relations among lane lines on the road and the like, so that the automatic driving or auxiliary driving function of the vehicle is improved according to a road topological graph.
The camera device is used for acquiring image information of the environment where the vehicle is located; a plurality of cameras can currently be installed on a vehicle so as to acquire information from more angles. The laser radar, i.e., a light detection and ranging (LiDAR) system, mainly comprises a transmitter, a receiver and a signal processing unit, where the transmitter is the laser emitting mechanism in the laser radar; after the laser emitted by the transmitter irradiates a target object, it is reflected by the target object, and the reflected light is converged on the receiver through a lens group. The signal processing unit is responsible for controlling the emission of the transmitter, processing the signal received by the receiver, and calculating information such as the position, speed, distance and/or size of the target object.
The millimeter-wave radar uses millimeter waves as a detection medium, and can measure the distance, angle, relative speed, and the like from the millimeter-wave radar to a measured object. Millimeter wave radars may be classified into long-range radar (LRR), mid-range radar (MRR), and short-range radar (SRR) according to the detection distance. The LRR mainly faces application scenarios such as active cruising and braking assistance, has low requirements on the width of the detected angular domain, and accordingly places low requirements on the 3 dB beam width of the antenna. The MRR/SRR mainly faces application scenarios such as automatic parking, lane merging assistance and blind spot detection, has high requirements on the width of the detected angular domain, and accordingly places high requirements on the 3 dB beam width of the antenna and requires the antenna to have a low side lobe level. The beam width ensures the detectable angular range, and the low side lobe reduces clutter energy reflected from the ground, lowers the false alarm probability, and ensures driving safety. The LRR can be arranged at the front of the vehicle body, the MRR/SRR can be arranged at the four corners of the vehicle, and 360° coverage around the vehicle body can be realized by using them together.
The millimeter wave radar may include a housing, at least one Printed Circuit Board (PCB) is disposed in the housing, and may include, for example, a power supply PCB and a radar PCB, where the power supply PCB may provide voltage for use inside the radar, and may also provide an interface for communication with other devices and a security function; the radar PCB may provide transceiving and processing of millimeter wave signals, on which components for millimeter wave signal processing and antennas (a transmitting antenna Tx and a receiving antenna Rx) for transceiving of millimeter wave signals are integrated. The antenna may be formed in a microstrip array on the rear surface of the radar PCB for transmitting and receiving millimeter waves.
An ultrasonic sensor, which may be called an ultrasonic radar, is a sensing device that uses ultrasonic detection, and its operating principle is to emit ultrasonic waves outward through an ultrasonic emitting device, receive the ultrasonic waves reflected by an obstacle through a receiving device, and measure and calculate a distance according to a time difference between the ultrasonic waves reflected and received. At present, the distance measured by the ultrasonic sensor can be used for prompting the distance from a vehicle body to an obstacle, assisting in parking or reducing unnecessary collision. It should be understood that the above sensors are merely illustrative of the sensors that may be configured on the vehicle in the embodiments of the present application and are not intended to be limiting, and in other embodiments, the sensors may include, but are not limited to, the above examples.
In the embodiment of the application, the lane line detection device may be an application program, and may be installed or operated in a chip or a component of a vehicle, or on an intelligent device such as a mobile phone and a tablet computer on the vehicle. Alternatively, the lane line detection means may be a software module, and may be disposed in any Electronic Control Unit (ECU) of the vehicle. Or, the lane line detection device may be a hardware module newly added in the vehicle, and the hardware module may be configured with related judgment logic or algorithm, and may be used as an ECU in the vehicle, and perform information transmission with other ECUs or various sensors through an automobile bus to realize lane line detection.
In practice, the lane line detection device may acquire a single frame image or a set of image sequences to be processed from the camera, for example. The lane line detection device may acquire a feature map of a first image, determine a target feature point in the feature map, and determine a first topological relation according to the target feature point, where the target feature point is associated with a position where the first topological relation changes, where the first topological relation is used to indicate an association relation between lane lines in the first image. The lane line detection device can divide the feature map into at least two feature map fragments according to the target feature points, so that the first topological relation is determined according to the at least two feature map fragments, a complex lane line detection scene can be converted into a simple scene, and lane line detection efficiency is improved.
For a group of image sequences (the image sequence includes a plurality of continuously acquired images, and the first image belongs to the group of image sequences), the lane line detection apparatus may determine a second topological relation according to a code of a lane line in the plurality of images in the image sequence, where the second topological relation is used to indicate an association relation between lane lines in the image sequence. Therefore, the lane topological relation can be obtained only by depending on the image sequence, and the robustness of the lane line detection method is improved due to the fact that parameters which can be introduced in the detection process are reduced, and errors caused by projection and other intermediate processes are reduced.
In one possible implementation, as shown in fig. 4, the lane line detection apparatus may be configured with a lane line detection network and a code matching module, and the lane line detection network may include at least one of the following: a neural network (Backbone) module, a target detection module (Point Proposal Head), a feature segmentation module, a feature fusion module, a lane line detection module (Lane Head) and a global relationship detection module.
The neural network module can learn local features and global topological features within one frame of image from an input single-frame image or image sequence, and generate a feature map of that frame. The target detection module can be used to determine target feature points in the feature map, so as to determine the positions where the topological relation of the lane lines in the frame changes. The feature segmentation module may perform slice division on the feature map according to the positions of the target feature points in the feature map to obtain at least two feature map slices, and determine the association relationship, that is, the local relationship, between the feature points of the at least two feature map slices. The global relationship detection module may be configured to output a global similarity matrix for the complete feature map to indicate the global relationships of the feature points in the feature map and enhance the relationships between lane lines within one frame of image. The feature fusion module can perform feature fusion on the feature map, or on the at least two feature map slices of the feature map, according to the local relationship and the global relationship, and input the feature fusion result into the lane line detection module. The lane line detection module may be configured to detect lane lines in the feature map or in the at least two feature map slices. The code matching module can be used to perform code matching on the lane lines in at least two feature map slices belonging to the same frame of image, or on the lane lines in a plurality of images of a group of image sequences.
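As a rough illustration of how some of these modules fit together, the sketch below composes a toy backbone, Point Proposal Head, global-relation embedding head and Lane Head (the feature segmentation, fusion and code matching steps are omitted). All layer definitions, channel counts and tensor shapes are assumptions for illustration, not the patent's actual network.

```python
import torch
import torch.nn as nn

class LaneTopologyNet(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        # Backbone: image -> feature map (one conv stands in for a CNN/Transformer)
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=4, padding=1), nn.ReLU())
        self.point_head = nn.Conv2d(feat_ch, 1, 1)    # Point Proposal Head: per-cell confidence
        self.global_head = nn.Conv2d(feat_ch, 16, 1)  # embedding used for the similarity matrix
        self.lane_head = nn.Conv2d(feat_ch, 1, 1)     # Lane Head: lane-line confidence per cell

    def forward(self, image):
        fmap = self.backbone(image)                         # feature map of the first image
        point_conf = torch.sigmoid(self.point_head(fmap))   # confidence of target feature points
        embedding = self.global_head(fmap)                  # per-cell embedding (global relation)
        lane_conf = torch.sigmoid(self.lane_head(fmap))     # lane-line centre-point confidence
        return fmap, point_conf, embedding, lane_conf

frame = torch.rand(1, 3, 256, 512)                          # one frame of the image sequence
fmap, point_conf, embedding, lane_conf = LaneTopologyNet()(frame)
```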
In the embodiment of the present application, the lane line detection apparatus can output the following results for one frame of image (denoted as a first image): a first topological relation, the lane line positions and lane line codes in each feature map slice, a similarity matrix (the similarity matrix is used to indicate the global association relation of the feature points in the feature map), and a second topological relation of the group of image sequences to which the first image belongs, where the second topological relation is used to indicate the association relation between lane lines in the image sequence. The above results can be provided to the aforementioned ACC, LDWS, and similar systems, so that those systems improve the automatic driving or driving assistance function of the vehicle according to the first topological relation and/or the second topological relation.
It should be noted that, in fig. 4, only the functional description of the lane line detection device is presented, but not limited thereto, in other embodiments, the lane line detection device may further include other functional modules, or the functional modules of the lane line detection device may have other names, which is not limited in the embodiments of the present application.
For convenience of understanding, before describing the lane line detection method according to the embodiment of the present application, the target feature point and each functional block of the lane line detection device according to the embodiment of the present application will be explained first.
1. Target feature point
In the embodiment of the present application, the target feature point is associated with a position when the topological relation of the lane line changes, and the position may also be referred to as a key position. For the convenience of distinguishing, the association relationship between the lane lines in the first image is represented by a first topological relationship, and the target feature point is associated with the position where the first topological relationship changes.
For example, the target feature point may be associated with any one of the following locations: lane stop (stop) position, lane split (split) position, or lane merge (merge) position.
As shown in fig. 5a, two parallel lanes, namely, a lane a and a lane B, exist on the same road, and the lane a and the lane B converge to a lane C in front due to the change of the lane topology, so that a lane line ab originally located between the lane a and the lane B ends at a position point C, and the topological relation of the lane line is changed, where the position point C is a stop position of the lane line ab.
As shown in fig. 5b, when a lane D exists on the road and, due to a change in the lane topology, branches into two lanes, i.e., a lane E ahead and a lane F ahead to the right, the right lane line D0 of the lane D branches at a position point D into the right lane line ed of the lane E and the left lane line df of the lane F; the position point D is the bifurcation position of the lane line D0, and the topological relation of the lane lines changes.
As shown in fig. 5c, the lanes G and the lanes H on the two roads are converged into a lane I due to the change of the road topology, so that the original left lane line G0 of the lane G and the original right lane line H0 of the lane H are converged at a position point G, which is the merging position of the lane line G0 and the lane line H0, and merged into the left lane line I0 of the lane I, and the topological relationship of the lane lines changes.
When designing the neural network model and the target detection model, target feature points may be defined and the models trained according to the three positions shown in fig. 5a to 5c, so that feature detection is performed on a first image using the trained neural network model, target feature points are determined in the corresponding feature map using the trained target detection model, and the positions where the lane line topological relation changes, represented by the target feature points, are identified to obtain the association relation between the lane lines in the first image, that is, the first topological relation.
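Purely as an illustration of the predefined key positions, one might label them with an enumeration such as the following; the names and integer values are hypothetical, since the patent does not prescribe any concrete encoding.

```python
from enum import Enum

class KeyPosition(Enum):
    STOP = 0    # a lane line ends (Fig. 5a)
    SPLIT = 1   # a lane line forks into two (Fig. 5b)
    MERGE = 2   # two lane lines merge into one (Fig. 5c)
```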
It should be noted that the three positions shown in fig. 5a to fig. 5c are exemplary illustrations of the predefined change positions of the lane line topological relation in the embodiment of the present application and are not limited at all, and in other embodiments, the target feature point may be defined according to business needs, scene needs, real road topological relation, or the like, which is not limited in the embodiment of the present application.
2. Neural network module
As shown in fig. 4, the neural network (Backbone) module may include a convolutional neural network (CNN) or a Transformer neural network model. The input of the neural network module can be a single-frame image or an image sequence. For an image sequence, the sequence may include a plurality of continuously acquired images, the sequence direction of the image sequence (i.e., the direction in which the plurality of images progress) is the same as the vehicle advancing direction, and the neural network model may process one frame of the image sequence at a time.
For the sake of distinction, in this embodiment the single frame of image, or the frame in the image sequence that currently needs to be processed, which is input to the neural network model is referred to as the first image. For the first image, the neural network module may perform feature extraction on the first image to obtain a feature map of the first image. Taking the feature map as an intermediate result, the lane line detection apparatus may further perform the subsequent steps of lane line detection based on the feature map, so as to output the following results corresponding to the first image: a first topological relation, the lane line positions and lane line codes in each feature map slice, a similarity matrix (the similarity matrix is used to indicate the global association relation of the feature points in the feature map), and a second topological relation of the group of image sequences to which the first image belongs.
3. Target detection module (Point Proposal Head)
The target detection module may be configured to calculate a confidence level that each feature point in the feature map of the first image is a target feature point, and determine the target feature point in the feature map according to the confidence level. In each of the expressions mentioned below, the meaning of the parameters is shown in the following table 1:
TABLE 1
(Table 1 appears as an image in the original document.)
As shown in fig. 6, the target detection model may use an N × 1 confidence map (where N is the total number of cells in the feature map, a positive integer) to obtain the confidence that each feature point in the feature map is a target feature point, and screen out, through masking, the feature points with higher confidence (for example, confidence greater than or equal to a first threshold) as the target feature points (that is, the feature points whose confidence is lower than the first threshold are regarded as background).
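A minimal sketch of this masking step is shown below; the threshold value and array shapes are assumptions.

```python
import numpy as np

def select_target_points(conf_map: np.ndarray, first_threshold: float = 0.5):
    """conf_map: (H, W) confidence that each cell contains a target feature point."""
    mask = conf_map >= first_threshold              # cells below the threshold are background
    rows, cols = np.nonzero(mask)
    return list(zip(rows.tolist(), cols.tolist()))  # (row, col) cells kept as target points

conf = np.random.rand(32, 64)                       # toy N = 32 * 64 confidence map
target_points = select_target_points(conf, first_threshold=0.5)
```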
The confidence loss function of the feature point can be shown in the following expressions (1) and (2), for example:
(Expressions (1) and (2) appear as images in the original document.)
wherein L_exist represents the loss function corresponding to the existence loss, which may be applied to cells in the feature map that contain a target feature point; L_none_exist represents the loss function corresponding to the non-existence loss, which may be used to reduce the confidence value of each background cell in the feature map. The confidence value of a feature point in the feature map may be approximately 1 if a target feature point exists at that position, and 0 if no target feature point exists there. Gn is a cell in which a lane line exists.
The target detection module may also obtain the location of the feature point of each output cell in the UV coordinate system through a position-loss function adjustment (fine-tune) of the feature point. In this embodiment, the UV coordinate system may use the upper left corner of the picture (including the first image, the feature map, any feature map slice, and the like) as an origin, the horizontal direction is a U coordinate, the vertical direction is a V coordinate, and (U, V) are coordinates of feature points in the picture.
As an example, the position loss function may calculate the deviation from the true position using a two-norm, as shown in the following expression (3):
(Expression (3) appears as an image in the original document.)
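Since expression (3) is only described as a two-norm deviation from the true position, the following is one plausible form of such a loss, written as an assumption rather than the patent's exact formula.

```python
import torch

def position_loss(pred_uv: torch.Tensor, gt_uv: torch.Tensor) -> torch.Tensor:
    """pred_uv, gt_uv: (M, 2) predicted and true UV coordinates of the M target points."""
    # two-norm of the deviation from the true position, averaged over the kept cells
    return torch.linalg.vector_norm(pred_uv - gt_uv, ord=2, dim=-1).mean()
```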
4. Feature segmentation module
In this embodiment of the application, the feature segmentation module may segment the feature map into at least two feature map segments along the transverse direction (perpendicular to the vehicle traveling direction) via identity transformation according to the transverse position of the target feature point in the feature map, as shown in fig. 7. The feature segmentation module may also subject the at least two feature map slices to a mapping (e.g., ROI Align) process to unify the output size of each feature map slice. The identity transformation may be, for example, an introduced residual network (equivalent network) to transfer information of the predicted target feature point to a suitable position of the feature map, so as to ensure that the feature map can be correctly divided.
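A minimal sketch of this lateral slicing is given below; plain bilinear interpolation stands in for ROI Align, and the slice output height is an assumed value.

```python
import torch
import torch.nn.functional as F

def slice_feature_map(fmap: torch.Tensor, target_rows, out_h: int = 16):
    """fmap: (C, H, W); target_rows: row indices of target feature points (topology changes)."""
    h, w = fmap.shape[1], fmap.shape[2]
    cuts = [0] + sorted(set(int(r) for r in target_rows)) + [h]
    slices = []
    for top, bottom in zip(cuts[:-1], cuts[1:]):
        if bottom <= top:
            continue
        band = fmap[:, top:bottom, :].unsqueeze(0)            # (1, C, h_i, W) lateral band
        band = F.interpolate(band, size=(out_h, w),           # unify the slice output size
                             mode="bilinear", align_corners=False)
        slices.append(band.squeeze(0))
    return slices   # at least two feature map slices when a target point exists

parts = slice_feature_map(torch.rand(64, 64, 128), target_rows=[40])
```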
5. Global relationship detection module
In the embodiment of the present application, the global relationship detection module can learn the relationships between lane line position points in a many-points-to-many-points manner, so as to enhance the global relationship features of the lane lines.
For example, the global relationship detection module may use a similarity matrix to describe the global relationship of lane lines, and the position points on the same lane line may uniformly use the same element value. For example, when the position points on the lanes belong to the same lane line, the corresponding element in the similarity matrix may be set to 1; when the position points on the lanes do not belong to the same lane line, the corresponding elements in the similarity matrix may be set to 2; the corresponding element of the position point on the non-lane in the similarity matrix may be set to 3.
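The element values 1/2/3 described above could be assembled into a similarity matrix roughly as follows; the data layout is an assumption.

```python
import numpy as np

def build_similarity_matrix(lane_ids):
    """lane_ids: per feature point, the lane-line id, or None for a non-lane point."""
    n = len(lane_ids)
    sim = np.full((n, n), 3, dtype=np.int64)        # 3: at least one point is not on a lane
    for i in range(n):
        for j in range(n):
            if lane_ids[i] is not None and lane_ids[j] is not None:
                sim[i, j] = 1 if lane_ids[i] == lane_ids[j] else 2
    return sim

sim = build_similarity_matrix([0, 0, 1, None])      # two points on line 0, one on line 1
```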
For example, the loss function of the global relationship detection module may use the following expression (4), and the similarity matrix may be represented as expression (5). (Expressions (4) and (5) appear as images in the original document.)
wherein L_Global represents the global association relation among all the feature points in the feature map, l(i, j) represents the element in the i-th row and j-th column of the similarity matrix, C_ij represents the element value, Np represents the dimension of the similarity matrix, a further symbol (shown as an image in the original document) represents an embedding feature, and K_1 and K_2 are constants that may take any value, e.g. K_1 = 1, K_2 = 2.
6. Feature fusion module
In the embodiment of the application, the feature fusion module can perform feature fusion based on the output result of the global relationship detection module and the output result of the feature segmentation module, and then input the feature fusion to the lane line detection module.
7. Lane line detection module (Lane Head)
The lane line detection module may be configured to detect the confidence that a feature point in the feature map, or in any one of its feature map slices, is a lane line center point, determine the lane lines in the feature map or feature map slice according to the confidence, and screen out the lane line center points with higher confidence (for example, confidence greater than or equal to a second threshold) through masking.
As shown in fig. 8, the lane line detection model may obtain the confidence of the lane lines by using an Np × 1 dimensional confidence map, and screen out the lane lines with higher confidence (for example, confidence greater than or equal to the second threshold) through masking (i.e., regarding the feature points whose confidence is lower than the second threshold as background).
The confidence loss function of the lane line center point can be represented by the following expressions (6) and (7), for example:
(Expressions (6) and (7) appear as images in the original document.)
wherein L_exist represents the loss function corresponding to the existence loss, which may be applied to cells in the feature map or feature map slice that contain a lane line center point; L_none_exist represents the loss function corresponding to the non-existence loss, which may be used to reduce the confidence value of each background cell in the feature map or feature map slice. If a lane line center point exists at a feature point position in the feature map or feature map slice, the confidence value of that feature point is approximately 1; if no lane line center point exists there, the confidence value is 0.
For example, the lane line detection module may also determine the encoding of the lane line using Np × 1 dimensional semantic prediction (semantic prediction), and determine the lane line having the same encoding through a group class (group class). Wherein the lane coding loss function is shown in the following expression (8):
(Expression (8) appears as an image in the original document.)
wherein L_encode represents the lane line coding loss.
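A minimal sketch of the group-class step, under the assumption that each lane line centre point carries a predicted code:

```python
from collections import defaultdict

def group_by_code(points):
    """points: iterable of (u, v, code) predictions for lane-line centre points."""
    lanes = defaultdict(list)
    for u, v, code in points:
        lanes[code].append((u, v))                  # same code -> same lane line
    return dict(lanes)

lanes = group_by_code([(10, 5, -1), (11, 9, -1), (40, 5, 1)])   # {-1: [...], 1: [...]}
```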
For example, the lane line detection module may also adjust (fine-tune) the location of the lane line center point of each output cell in the UV coordinate system via a lane line center point location loss function. It should be noted that, in the embodiment of the present application, the UV coordinate system may use an upper left corner of a picture (including the first image, the feature map, and any feature map slice) as an origin, and the horizontal direction is a U coordinate and the vertical direction is a V coordinate.
As an example, the lane line center point position loss function may calculate a deviation from a true position using a two-norm, as shown in the following expression (9):
(Expression (9) appears as an image in the original document.)
it should be noted that, in the embodiment of the present application, each functional module and the code matching module of the lane line detection network are obtained by learning and training in advance, and the learning and training process is not limited in the embodiment of the present application.
The following describes a lane line detection method according to an embodiment of the present application with reference to a method flowchart.
Fig. 9 shows a schematic flowchart of the lane line detection method according to the embodiment of the present application. The method can be realized by the lane line detection device, and the lane line detection device can be deployed on a vehicle or in a cloud server. Referring to fig. 9, the method may include the steps of:
s910: the lane line detection device acquires a feature map of the first image.
In this embodiment, the first image may be a frame image currently required to be processed in a group of image sequences, where the image sequence includes a plurality of images acquired continuously.
In an alternative design, in S910, the lane line detecting apparatus may sequentially use an image of the plurality of images as a first image, and obtain a feature map of the first image through the aforementioned neural network module.
S920: the lane line detection device determines a target feature point in the feature map.
In one example, the lane line detection apparatus may calculate, by the aforementioned target detection module, a confidence level that each feature point in the feature map is the target feature point, and determine the target feature point in the feature map according to the confidence level. For example, the target feature point may be associated with, but not limited to, any of the following locations: a lane line stop position, a lane line bifurcation position, or a lane line merge position.
S930: and the lane line detection device determines a first topological relation according to the target feature point.
In the embodiment of the application, the target feature point may be predefined according to a service requirement, a scene requirement, a real road topological relation, or the like, the target feature point may be associated with a position where a first topological relation changes, and the first topological relation may be used to indicate an association relation between lane lines in the first image. Illustratively, the service requirement may include, but is not limited to, a requirement of the aforementioned high-precision map service, or a requirement of an automatic driving service, or a requirement of a driving assistance service, and the scenario requirement may include a scenario in which the high-precision map service, or the automatic driving service, or the driving assistance service needs to be applied, including, but not limited to, a high-precision mapping service scenario, a navigation service scenario, an automatic driving service scenario, or a driving assistance service scenario. By predefining the target characteristic points, the topological relation of the lane lines can be determined in an auxiliary manner at lower processing cost, so that the related services or the accurate implementation of the related services are assisted.
In an example, when S930 is implemented, the lane line detection apparatus may perform slice division on the feature map by using the aforementioned feature segmentation module according to the position of the target feature point in the feature map to obtain at least two feature map slices, and determine the first topological relation according to the codes of the lane lines in the at least two feature map slices by using the lane line detection module and the code matching module. Optionally, the lane line detection apparatus may further adjust, according to the position associated with the target feature point, the code of the lane line where the target feature point is located and/or the code of the adjacent lane line.
In the embodiment of the present application, for ease of understanding, the lane where the vehicle is currently located is referred to as the current driving lane. When the lane line detection device codes the detected lane lines, for example, the first lane line to the left of the current driving lane may be coded as -1, the second lane line to the left as -2, and so on; the first lane line to the right of the current driving lane may be coded as 1, the second lane line to the right as 2, and so on, as shown in fig. 10a.
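The coding convention of fig. 10a can be sketched as follows; representing the detected lane lines by their signed lateral offsets from the vehicle is an assumption made purely for illustration.

```python
# Illustrative coding of detected lane lines relative to the current driving lane:
# lines on the left get -1, -2, ...; lines on the right get 1, 2, ... (fig. 10a).
def encode_lane_lines(lateral_offsets):
    """lateral_offsets: signed lateral distance of each lane line from the vehicle,
    negative = left of the vehicle, positive = right; returns {line_index: code}."""
    codes = {}
    left = sorted((i for i, x in enumerate(lateral_offsets) if x < 0),
                  key=lambda i: -lateral_offsets[i])   # nearest left line first
    right = sorted((i for i, x in enumerate(lateral_offsets) if x > 0),
                   key=lambda i: lateral_offsets[i])   # nearest right line first
    for k, i in enumerate(left, start=1):
        codes[i] = -k
    for k, i in enumerate(right, start=1):
        codes[i] = k
    return codes
```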
Case one: lane line matching between different feature map slices of a single-frame image
As described above, for the first image, the lane line detection device may divide the feature map of the first image into at least two feature map slices according to the target feature point, and encode the lane lines identified in each of the at least two feature map slices. As shown in fig. 10b, the slicing positions of the feature map in the lateral direction (perpendicular to the vehicle traveling direction) are indicated by dotted lines; by slicing in the lateral direction, the feature map of a single frame image can be divided into a plurality of feature map slices, for example, feature map slice 1 and feature map slice 2. For each feature map slice, the lane line detection device may identify the lane lines in the slice and encode them according to the vehicle position. For example, in feature map slice 1, the lane lines on the left side of the lane where the vehicle is currently located may be encoded as -1 and -2, and the lane lines on the right side may be encoded as 1 and 2; in feature map slice 2, the lane lines on the left side may be encoded as -1 and -2, and the lane lines on the right side may be encoded as 1, 2, and 3. It should be noted that, in the embodiment of the present application, for ease of understanding, each feature map slice is shown in fig. 10a in the slice area of the image to which it corresponds.
For a plurality of feature map slices corresponding to the same frame of image, for lane lines that do not include a position associated with a target feature point, the lane lines in different slices can be uniformly classified according to their codes and the code matching rule between preceding and following slices (that is, lane lines with the same code are the same lane line), so that the association relation between the lane lines in different feature map slices is determined. For example, in fig. 10a, lane line -2 in feature map slice 1 is associated with lane line -2 in feature map slice 2, and lane line -1 in feature map slice 1 is associated with lane line -1 in feature map slice 2.
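This matching rule can be sketched as below; the per-slice dictionaries are a hypothetical data layout, not the format used by the code matching module.

```python
# Sketch of case one: lane lines in adjacent feature map slices that carry the
# same code are treated as the same lane line.
def match_by_code(codes_prev_slice: dict, codes_next_slice: dict):
    """Both arguments map lane_line_id -> code within one slice;
    returns (prev_id, next_id) pairs that share a code."""
    by_code = {code: lid for lid, code in codes_next_slice.items()}
    return [(lid, by_code[code]) for lid, code in codes_prev_slice.items()
            if code in by_code]
```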
For a lane line that includes a position associated with a target feature point, the code of that lane line or of its adjacent lane lines is first adjusted according to the type of position associated with the target feature point, and the association relation between the lane lines in different feature map slices is then determined according to the codes of the lane lines in the different slices. For example, the code of the adjacent lane line to the right of the right lane line of the lane where a merging position or a stopping position is located is reduced by 1, and the code of the right lane line of the lane where a bifurcation position is located is reduced by 1.
As shown in fig. 10b, in feature map slice 1, lane line 1 and lane line 2 include a lane line merging position associated with a target feature point; the code of lane line 1 may remain unchanged, and the code of lane line 2 may be reduced by 1 and adjusted to 1, so that it can be determined that lane line 1 and lane line 2 in feature map slice 1 are associated with lane line 1 in feature map slice 2. In feature map slice 1, lane line 3 is the adjacent lane line on the right side of lane line 2; its code may be reduced by 1 and adjusted to 2, so that it can be determined that lane line 3 in feature map slice 1 is associated with lane line 2 in feature map slice 2.
It should be understood that the code adjustment manner shown in fig. 10b is only an example. In a specific implementation, the codes of the lane line that includes the position associated with the target feature point, or of its adjacent lane lines, need to be adjusted according to the actual position of the vehicle in the lane and the change of the lane topology, which is not described again here.
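With the same caveat, the adjustment used in the fig. 10b example can be sketched as follows; it reproduces only that single example and is not the general adjustment rule of the embodiment.

```python
# Sketch of the fig. 10b adjustment: the merging line and every line to its right
# have their codes reduced by 1 so that the codes line up with the next slice.
def adjust_codes_at_merge(codes: dict, merging_code: int):
    """codes: lane_line_id -> code; merging_code: code of the line that merges away."""
    return {lid: (code - 1 if code >= merging_code else code)
            for lid, code in codes.items()}

# fig. 10b example: {1: 1, 2: 2, 3: 3} with merging_code=2 -> {1: 1, 2: 1, 3: 2}
```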
Case two: if the vehicle changes lanes while traveling, it will press a lane line during the lane change. The resulting change of the vehicle position changes the encoding and matching results of the lane lines in the collected single-frame image or group of image sequences. To accurately obtain the topological relation of the lane lines in different feature map slices or different images, in a possible design, the lane line detection device may encode the lane line pressed by the vehicle as 0. In this case, similarly to case one, the lane line detection apparatus adjusts, through the lane line detection module and the code matching module, the codes of the other lane lines on the left and/or right side of lane line 0, taking lane line 0 as a boundary and according to the vehicle traveling direction and the lane change direction, and then classifies lane lines with the same code in different feature map slices or different images as the same lane line.
As shown in fig. 10c, the vehicle travels in lane A and changes lanes to lane B; during the lane change, the vehicle may press the lane line between lane A and lane B. For the plurality of feature map slices corresponding to one frame of image collected during the lane change, such as feature map slice 1, feature map slice 2, and feature map slice 3 (or for a plurality of images in a group of image sequences, such as image 1, image 2, and image 3), if a lane line is coded 0, the lane line detection apparatus may adjust the codes of the relevant lane lines through the lane line detection module and the code matching module. For example, in feature map slice 2 (or image 2) of fig. 10c, since the vehicle presses lane line 0 and changes lanes to lane B on the right side of lane A, the codes of the lane lines on the left side of lane A may be kept unchanged and the codes of the other lane lines on the right side of lane line 0 may each be increased by 1; after the code adjustment, the association relation between the lane lines in feature map slice 1 and feature map slice 2 can be determined. Alternatively, in feature map slice 1 (or image 1) of fig. 10c, the codes of the lane lines on the left side of lane A may be kept unchanged and the codes of the other lane lines on the right side of lane line 0 may each be reduced by 1, so that the association relation between the lane lines in feature map slice 1 and feature map slice 2 can be determined after the code adjustment.
Similarly, for feature map slice 2 (or image 2) and feature map slice 3 (or image 3) in fig. 10c, since the vehicle changes lanes from lane A to lane B on the right, in feature map slice 3 (or image 3) the codes of the lane lines on the right side of lane B may be kept unchanged and the codes of the lane lines on the left side of lane B may each be increased by 1; after the code adjustment, the association relation between the lane lines in feature map slice 2 (or image 2) and feature map slice 3 (or image 3) can be determined. Alternatively, in feature map slice 2 (or image 2), the codes of the lane lines on the right side of lane B may be kept unchanged and the codes of the lane lines on the left side of lane B may each be reduced by 1, and the association relation between the lane lines in feature map slice 2 (or image 2) and feature map slice 3 (or image 3) can be determined after the code adjustment. Similarly, for the group of image sequences to which the first image belongs, the lane line detection apparatus may further determine, through the lane line detection module and the code matching module, a second topological relation according to the codes of the lane lines in the plurality of images in the image sequence, where the second topological relation is used to indicate the association relation between the lane lines in the image sequence.
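A sketch of the case-two renumbering around the pressed lane line (code 0) follows; the sign of the shift (+1 or -1) is chosen per slice or per image as in the fig. 10c discussion, and the dictionary layout is a hypothetical representation.

```python
# Sketch for case two (lane change to the right): codes to the left of the pressed
# line 0 are kept, codes to its right are shifted by +1 or -1 depending on the slice.
def renumber_after_right_lane_change(codes: dict, shift: int):
    """codes: lane_line_id -> code, with the pressed lane line already coded 0;
    shift: +1 or -1 applied to every lane line on the right of line 0."""
    return {lid: (code + shift if code > 0 else code)
            for lid, code in codes.items()}
```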
Case three: for a plurality of images in a group of image sequences, the position points on the lane may be classified according to the codes along the vehicle traveling direction, for example, the position points coded as 1 in the front and rear images all belong to lane line 1.
Case four: if the vehicle changes lanes while traveling and presses a lane line during the lane change, the code of the lane line pressed by the vehicle is 0. In this case, similarly to case two, after the codes of the other lane lines on the left and/or right side of lane line 0 are adjusted according to the lane change direction, taking lane line 0 as a boundary, the lane lines with the same code in different images of the group of image sequences are classified as the same lane line. It should be noted that, for different images in a group of image sequences, the second topological relation may be determined in the same or a similar manner as described above for different feature map slices belonging to the same frame of image; for a detailed implementation, refer to the foregoing related description, and details are not repeated here.
In addition, in this embodiment of the application, the lane line detection apparatus may further determine, through the global relationship detection module, a similarity matrix according to the feature map, where the similarity matrix is used to indicate the global association relation of the feature points in the feature map.
As shown in fig. 11, for the first image to be processed, after the feature map of the first image is obtained by the neural network module, the feature map may be input into the global relationship detection module obtained by learning and training in advance, and the global relationship detection module may output the similarity matrix corresponding to the feature map according to the position points on the lane lines associated with the feature points in the feature map. The global relationship detection module may determine the similarity matrix using expression (5), with the loss function using expression (4) and the truth matrix.
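Since expressions (4) and (5) appear earlier in the document and are not reproduced here, the following sketch substitutes a cosine-similarity placeholder for expression (5) and a mean-squared-error placeholder for the loss of expression (4); both substitutions are assumptions made only for illustration.

```python
# Placeholder sketch of the similarity matrix and its loss; the actual
# expressions (4) and (5) of the embodiment are not reproduced here.
import torch
import torch.nn.functional as F

def similarity_matrix(feature_map: torch.Tensor) -> torch.Tensor:
    """feature_map: (C, H, W); returns an (H*W, H*W) matrix of pairwise similarities."""
    feats = feature_map.flatten(1).t()     # (H*W, C), one row per feature point
    feats = F.normalize(feats, dim=1)
    return feats @ feats.t()               # cosine similarity between feature points

def similarity_loss(pred: torch.Tensor, truth: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(pred, truth)         # placeholder for the loss of expression (4)
```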
Therefore, with the lane line detection method, the lane line detection device can analyze the acquired image of the environment around the vehicle, determine the association relation between the lane lines in the image according to the target feature points, and convert a complex lane line detection scene into a simple one, thereby improving the lane line detection efficiency. In addition, fewer parameters are introduced in the detection process, and errors caused by projection and other intermediate processing are reduced, thereby improving the robustness of the lane line detection method.
In addition, when implementing the lane line detection method of S910-S930 on the vehicle side, the lane line detection apparatus may further output related information on a human-machine interface (HMI) of the vehicle, such as lane line topology information, including but not limited to the current lane where the vehicle is located, the lane lines included in the road to which the current lane belongs, and the topological relation of those lane lines; a high-precision map or navigation information obtained according to the lane line topological relation; and an automatic driving strategy or a driving assistance strategy obtained according to the lane line topological relation, so that the driver on the vehicle side can conveniently control the vehicle or follow the automatic driving control process of the vehicle according to the related information output on the HMI.
Fig. 12a shows a schematic illustration of a vehicle interior. The HMI may be an in-vehicle screen (also referred to as a central control display or central control screen) 102, 104, or 105, and the HMI may output a first picture in real time, where the first picture may include the aforementioned lane line topology information, a high-precision map or navigation information obtained according to the lane line topological relation, an automatic driving strategy or a driving assistance strategy obtained according to the lane line topological relation, or the like.
As a further example, fig. 12b shows a schematic diagram of a head-up display (HUD) scenario to which the embodiment of the present application is applicable. HUD technology has in recent years been increasingly widely used in the automotive, aerospace, and navigation fields. The image projection device in the HUD apparatus may project the aforementioned lane line topology information, the high-precision map or navigation information obtained from the lane line topological relation, or the automatic driving strategy or driving assistance strategy obtained from the lane line topological relation onto the windshield, and form a virtual image directly in front of the driver's line of sight by reflection from the windshield, so that the driver can see this information without lowering the head. Compared with display modes such as the instrument panel and the central control screen in fig. 12a, which require the driver to look down, the HUD reduces the driving risk that may be caused by the driver being unable to observe road conditions while looking down and by the resulting pupil changes, and is therefore a safer in-vehicle display mode to which the embodiment of the present application is applicable. In addition, so as not to obstruct the road conditions, the embodiment of the present application is also applicable to an augmented reality (AR) HUD (AR-HUD), which superimposes a digital image on the real environment outside the vehicle so that the driver obtains an augmented reality visual effect; this can be used for AR navigation, adaptive cruise control, lane departure warning, and the like, which is not limited in the embodiment of the present application.
The embodiment of the present application further provides a lane line detection apparatus, which is configured to execute the method performed by the lane line detection apparatus in the foregoing embodiments; for related features, refer to the foregoing method embodiments, and details are not repeated here.
As shown in fig. 13, the apparatus 1300 may include: an obtaining unit 1301, configured to obtain a feature map of the first image; a first determining unit 1302, configured to determine a target feature point in the feature map; a second determining unit 1303, configured to determine a first topological relation according to the target feature point, where the target feature point is associated with a position where the first topological relation changes, and the first topological relation is used to indicate an association relation between lane lines in the first image. For a specific implementation, please refer to the detailed description in the embodiments shown in fig. 2 to fig. 12b, which is not repeated herein. It should be noted that, in the embodiment of the present application, the first determining unit 1302, the second determining unit 1303, the third determining unit mentioned above, and the fourth determining unit may be different processors, or may be the same processor, which is not limited in the embodiment of the present application.
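A purely structural sketch of how the units of apparatus 1300 could be wired to the S910-S930 steps is given below; the callables passed in are hypothetical stand-ins for the modules described above, not part of the embodiment.

```python
# Illustrative wiring of apparatus 1300; not an implementation of the embodiment.
class LaneLineDetectionApparatus:
    def __init__(self, backbone, point_detector, topology_builder):
        self.obtaining_unit = backbone                     # S910: feature map of the first image
        self.first_determining_unit = point_detector       # S920: target feature points
        self.second_determining_unit = topology_builder    # S930: first topological relation

    def detect(self, image):
        feature_map = self.obtaining_unit(image)
        target_points = self.first_determining_unit(feature_map)
        return self.second_determining_unit(feature_map, target_points)
```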
It should be noted that, in the embodiment of the present application, the division of the units is schematic and is only a logical function division; in actual implementation, another division manner may be used. The functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In a simple embodiment, those skilled in the art will recognize that the lane line detection device in the above embodiments may take the form shown in fig. 14.
Apparatus 1400 as shown in fig. 14 includes at least one processor 1410, memory 1420, and optionally communication interface 1430.
The specific connection medium between the processor 1410 and the memory 1420 is not limited in the embodiments of the present application.
In the apparatus of fig. 14, a communication interface 1430 is further included, and the processor 1410 may perform data transmission through the communication interface 1430 when communicating with other devices.
When the lane line detection apparatus takes the form shown in fig. 14, the processor 1410 in fig. 14 may invoke the computer instructions stored in the memory 1420, so that the apparatus 1400 performs the method performed by the lane line detection apparatus in any of the above method embodiments.
The embodiment of the present application also relates to a computer program product, which when running on a computer, causes the computer to execute the steps implemented by the lane line detection apparatus.
The embodiment of the present application also relates to a computer-readable storage medium, in which program codes are stored, and when the program codes are run on a computer, the computer is caused to execute the steps implemented by the lane line detection apparatus.
The embodiments of the present application also relate to a chip system, which includes a processor for calling a computer program or computer instructions stored in a memory to cause the processor to execute the method in any of the above method embodiments.
In one possible implementation, the processor is coupled to the memory through an interface.
In one possible implementation, the system-on-chip further includes a memory having a computer program or computer instructions stored therein.
The embodiments of the present application also relate to a processor for invoking a computer program or computer instructions stored in a memory to cause the processor to perform a method as in any of the above-described method embodiments.
The processor mentioned in any of the above embodiments may be a general purpose central processing unit, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the method in any of the above embodiments. The memory referred to anywhere above may be a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a Random Access Memory (RAM), or the like.
It should be understood that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (16)

1. A lane line detection method is characterized by comprising the following steps:
acquiring a characteristic map of a first image;
determining target feature points in the feature map;
and determining a first topological relation according to the target feature point, wherein the target feature point is associated with the position where the first topological relation changes, and the first topological relation is used for indicating the association relation between lane lines in the first image.
2. The method of claim 1, wherein determining target feature points in the feature map comprises:
calculating a confidence level that each feature point in the feature map is the target feature point;
and determining the target feature point in the feature map according to the confidence level.
3. The method according to claim 1 or 2, wherein the determining the first topological relation according to the target feature point comprises:
according to the position of the target feature point in the feature map, carrying out slice division on the feature map to obtain at least two feature map slices;
and determining the first topological relation according to codes of lane lines in the at least two feature map slices.
4. The method of claim 3, further comprising:
and adjusting the codes of the lane line where the target feature point is located and/or the adjacent lane line according to the position associated with the target feature point.
5. The method according to any one of claims 1-4, wherein the target feature point is associated with any one of the following positions: a lane line stop position, a lane line bifurcation position, or a lane line merge position.
6. The method according to any one of claims 1-5, wherein the first image belongs to a group of image sequences, the method further comprising:
and determining a second topological relation according to codes of lane lines in a plurality of images in the image sequence, wherein the second topological relation is used for indicating the association relation between the lane lines in the image sequence.
7. The method according to any one of claims 1-6, further comprising:
and determining a similarity matrix according to the feature map, wherein the similarity matrix is used for indicating the global association relation of the feature points in the feature map.
8. A lane line detection apparatus, comprising:
the acquisition unit is used for acquiring a feature map of the first image;
a first determining unit, configured to determine a target feature point in the feature map;
a second determining unit, configured to determine a first topological relation according to the target feature point, where the target feature point is associated with a position where the first topological relation changes, and the first topological relation is used to indicate an association relation between lane lines in the first image.
9. The apparatus of claim 8, wherein the first determining unit is configured to:
calculating a confidence level that each feature point in the feature map is the target feature point;
and determining the target feature point in the feature map according to the confidence level.
10. The apparatus according to claim 8 or 9, wherein the second determining unit is configured to:
according to the position of the target feature point in the feature map, carrying out slice division on the feature map to obtain at least two feature map slices;
and determining the first topological relation according to codes of lane lines in the at least two feature map slices.
11. The apparatus of claim 10, further comprising:
and the adjusting unit is used for adjusting the codes of the lane line where the target feature point is located and/or the adjacent lane line according to the position associated with the target feature point.
12. The apparatus according to any one of claims 8-11, wherein the target feature point is associated with any one of: a lane line stop position, a lane line bifurcation position, or a lane line merge position.
13. The apparatus according to any of claims 8-12, wherein the first image belongs to a group of image sequences, the apparatus further comprising:
and a third determining unit, configured to determine a second topological relation according to codes of lane lines in the multiple images in the image sequence, where the second topological relation is used to indicate an association relation between the lane lines in the image sequence.
14. The apparatus according to any one of claims 8-13, further comprising:
and a fourth determining unit, configured to determine a similarity matrix according to the feature map, wherein the similarity matrix is used for indicating the global association relation of the feature points in the feature map.
15. A lane line detection apparatus, comprising: a processor and a memory;
the memory is used for storing programs;
the processor is configured to execute the program stored in the memory to cause the apparatus to implement the method of any of claims 1-7.
16. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code, when run on a computer, causes the computer to carry out the method according to any one of claims 1-7.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination