WO2022051951A1 - Lane line detection method, related device, and computer-readable storage medium

Lane line detection method, related device, and computer-readable storage medium

Info

Publication number
WO2022051951A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
lane line
target pixel
pixel
confidence
Prior art date
Application number
PCT/CN2020/114289
Other languages
English (en)
French (fr)
Inventor
屈展
金欢
夏晗
彭凤超
杨臻
张维
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2020/114289 priority Critical patent/WO2022051951A1/zh
Priority to EP20952739.9A priority patent/EP4202759A4/en
Priority to CN202080006576.2A priority patent/CN114531913A/zh
Publication of WO2022051951A1 publication Critical patent/WO2022051951A1/zh
Priority to US18/180,274 priority patent/US20230215191A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Definitions

  • the present application relates to the technical field of intelligent transportation, and in particular, to a lane line detection method, related equipment, and a computer-readable storage medium.
  • Lane line detection refers to detecting the lane lines on the road while a vehicle is driving, to keep the vehicle within its lane and reduce the chance of the vehicle colliding with other vehicles by crossing out of the lane. Owing to the diversity of driving scenes, the diversity of lane line types, interference from environmental factors such as distance, occlusion, and illumination, and the fact that the number of lane lines in each image frame is not fixed, lane line detection remains a challenging subject.
  • In existing approaches, the original image is generally preprocessed (for example, by edge detection) to obtain the edge information of the image, and the edge points of the lane line are then extracted from the obtained edge information.
  • The extracted edge points are then fitted into a lane line.
  • However, this method requires a large amount of computation to extract image edges, which not only consumes computing resources but also easily leads to inaccurate lane line detection.
  • Moreover, although this method can detect lane lines close to the vehicle's current position, it cannot accurately detect lane lines farther away.
  • The present application provides a lane line detection method, a related device, and a computer-readable storage medium, which can detect lane lines quickly and accurately.
  • In a first aspect, an embodiment of the present application provides a lane line detection method.
  • The method may include the following steps: first, acquiring a lane line image to be identified; then, determining, based on the lane line image, candidate pixel points for identifying the lane line area to obtain a candidate point set, where the lane line area is the area in the lane line image where the lane lines are located together with its surroundings, and a candidate pixel point is a pixel point that is most likely to fall within the lane line area; next, selecting a target pixel point from the candidate point set and obtaining at least three position points associated with the target pixel point in its neighborhood, the at least three position points being on the same lane line (here, the position of the target pixel point indicates the existence of a lane line, and the three position points characterize the local structure of the lane line in the neighborhood of the target pixel point); finally, taking the target pixel point as the starting point, expanding according to the at least three position points associated with the target pixel point to obtain the set of lane line points corresponding to the target pixel point.
  • In this way, the lane line detection device can take the target pixel point as the starting point and, guided by the at least three position points associated with the target pixel point, obtain the other instance points of the lane line by expansion, thereby obtaining the lane line point set.
  • the lane lines in the lane line image can be quickly and accurately identified.
  • This method can quickly and accurately detect not only the lane lines close to the vehicle's current position but also those farther away from the current lane; in other words, it can faithfully reflect the extension of the lane lines, which provides a basis for automatic driving and in turn helps ensure safety during automatic driving.
  • In a possible implementation, determining candidate pixel points for identifying the lane line area based on the lane line image to obtain the candidate point set may include: first, generating a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image and the confidence value represents the credibility of each pixel belonging to the lane line area; then, taking the pixels whose confidence values exceed a threshold as the candidate pixel points for identifying the lane line area.
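To make this step concrete, the following is a minimal Python/NumPy sketch of turning a confidence map into a candidate point set; the function name, the assumption that an (H, W) confidence map is available from an upstream network, and the 0.5 threshold are illustrative rather than taken from the patent.

```python
import numpy as np

def select_candidate_points(confidence_map: np.ndarray,
                            conf_threshold: float = 0.5) -> np.ndarray:
    """Return (row, col) coordinates of pixels whose confidence value exceeds
    the threshold, i.e. the pixels most likely to fall in the lane line area.
    """
    rows, cols = np.where(confidence_map > conf_threshold)
    return np.stack([rows, cols], axis=1)  # candidate point set, shape (K, 2)
```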
  • In a possible implementation, the at least three position points include a first position point, a second position point, and a third position point. The pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; and the pixel row M rows below the row of the target pixel point is the row of the third position point, where M and N are integers greater than 0. Taking the target pixel point as the starting point, expanding according to the at least three position points associated with the target pixel point may include: first, adjusting the starting position of the target pixel point according to the first offset (the offset from the target pixel point to the second position point) to obtain the final position of the target pixel point; then, taking that final position as the starting point, obtaining the first lane line point by expanding on the target pixel point according to the second offset (the offset from the target pixel point to the first position point) and the second lane line point by expanding according to the third offset (the offset from the target pixel point to the third position point); and, while the confidence value corresponding to an expanded lane line point is greater than the first confidence value, taking that point as the current target pixel point and repeating the expansion steps, until the confidence value of the newly expanded lane line point is not greater than the first confidence value.
  • In a possible implementation, the at least three position points include a fourth position point and a fifth position point. The pixel row N rows above the row of the target pixel point is the row of the fourth position point, and the pixel row M rows below is the row of the fifth position point, where M and N are integers greater than 0. Taking the target pixel point as the starting point, expanding according to the at least three position points associated with the target pixel point may include: first, taking the starting position of the target pixel point as the starting point, obtaining the third lane line point by expanding on the target pixel point according to the fourth offset (the offset from the target pixel point to the fourth position point) and the fourth lane line point by expanding according to the fifth offset (the offset from the target pixel point to the fifth position point); then, when the confidence value corresponding to the third lane line point is greater than the first confidence value, taking the third lane line point as the current target pixel point and repeating the step of obtaining a third lane line point by expanding according to the fourth offset; likewise, when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, taking the fourth lane line point as the current target pixel point and repeating the step of obtaining a fourth lane line point by expanding according to the fifth offset, until the confidence value of the newly expanded lane line point is not greater than the first confidence value.
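A rough sketch of this row-wise expansion, under the assumption that the network predicts, for each pixel, one horizontal offset toward the lane point in the same row and one toward the points N rows above and M rows below; the names, the fixed row spacings, the rounding, and the stopping threshold are all illustrative.

```python
import numpy as np

def expand_lane_points(start, conf, dx_same, dx_up, dx_down,
                       n_up=4, m_down=4, conf_thr=0.5, max_steps=200):
    """Grow a lane line point set from one target pixel.

    conf:    (H, W) confidence map.
    dx_same: (H, W) horizontal offset to the lane point in the same row
             (the "first offset", used to refine the starting position).
    dx_up:   (H, W) horizontal offset to the lane point N rows above.
    dx_down: (H, W) horizontal offset to the lane point M rows below.
    """
    H, W = conf.shape
    r, c = start
    c = min(max(int(round(c + dx_same[r, c])), 0), W - 1)  # refine within row
    points = [(r, c)]
    for dx_map, dr in ((dx_up, -n_up), (dx_down, m_down)):  # walk both ways
        cr, cc = r, c
        for _ in range(max_steps):
            nr = cr + dr
            nc = int(round(cc + dx_map[cr, cc]))
            if not (0 <= nr < H and 0 <= nc < W) or conf[nr, nc] <= conf_thr:
                break                                  # confidence stop rule
            points.append((nr, nc))
            cr, cc = nr, nc
    return points
```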
  • In a possible implementation, the at least three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N rows above the row of the target pixel point is the row of the sixth position point, the row of the target pixel point is the same as the row of the seventh position point, and the pixel row M rows below is the row of the eighth position point, where M and N are integers greater than 0. Taking the target pixel point as the starting point, expanding according to the at least three position points associated with the target pixel point may include: first, adjusting the starting position of the target pixel point according to the sixth offset (the offset from the target pixel point to the seventh position point) to obtain the sixth lane line point; then, according to the seventh offset, obtaining the candidate pixel point closest to the sixth position point to obtain the seventh lane line point, and expanding on the seventh lane line point according to the sixth offset to obtain the eighth lane line point; similarly, according to the eighth offset, obtaining the candidate pixel point closest to the eighth position point to obtain the ninth lane line point, and expanding on the ninth lane line point according to the sixth offset to obtain the tenth lane line point. The sixth lane line point, the eighth lane line point, and the tenth lane line point are used to form the lane line point set corresponding to the target pixel point.
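The "closest candidate pixel" step used by this variant could be implemented with a helper like the one below, which re-anchors an extended position onto the nearest actual candidate point; using Euclidean distance here is an assumption.

```python
import numpy as np

def snap_to_nearest_candidate(point, candidates):
    """candidates: (K, 2) array of (row, col) candidate pixels.
    Returns the candidate nearest to `point` as a (row, col) tuple."""
    dists = np.linalg.norm(candidates - np.asarray(point), axis=1)
    r, c = candidates[int(np.argmin(dists))]
    return int(r), int(c)
```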
  • In a possible implementation, selecting the target pixel point from the candidate point set may include: first, selecting a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods of its first pixel points; a first pixel point is a pixel whose confidence value is greater than a second confidence threshold, and a target confidence maximum is a local confidence maximum greater than the second confidence threshold (the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the credibility of each pixel belonging to the lane line area); then, in the target pixel row, selecting all pixels whose confidence value is greater than the second confidence threshold as target pixel points, or selecting, within the neighborhood of each such pixel, the pixel with the maximum confidence value as the target pixel point.
  • In a possible implementation, selecting the target pixel point from the candidate point set may alternatively include: first, selecting target pixel rows in the lane line image according to the confidence map, where the target pixel rows are the pixel rows in which the number of second pixel points is greater than a target value and a second pixel point is a pixel whose confidence value is greater than the second confidence threshold (the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the credibility of each pixel belonging to the lane line area); then, in each target pixel row, selecting all pixels whose confidence value is greater than the second confidence threshold as target pixel points.
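Both row-selection variants can be sketched as follows, given the confidence map from earlier; treating a pixel as a local maximum when it exceeds its horizontal neighbours, and the threshold and target-count values, are illustrative assumptions.

```python
import numpy as np

def select_target_row(conf, conf_thr=0.5):
    """First variant: the row with the most local confidence maxima above
    the threshold."""
    left = np.pad(conf, ((0, 0), (1, 0)), constant_values=-np.inf)[:, :-1]
    right = np.pad(conf, ((0, 0), (0, 1)), constant_values=-np.inf)[:, 1:]
    maxima = (conf > conf_thr) & (conf >= left) & (conf >= right)
    return int(np.argmax(maxima.sum(axis=1)))   # row index with most maxima

def select_target_rows_by_count(conf, conf_thr=0.5, target_count=3):
    """Second variant: every row whose count of above-threshold pixels
    exceeds a target value."""
    counts = (conf > conf_thr).sum(axis=1)
    return np.where(counts > target_count)[0]   # array of row indices
```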
  • In a possible implementation, the method described in this application may further include the following steps: first, obtaining the degree of overlap between every two lane line point sets; then, determining whether the overlap between the two lane line point sets is greater than a target threshold, and if so, deleting either one of the two lane line point sets.
  • By deleting one of any two lane line point sets whose overlap exceeds the target threshold, the lane line detection device ensures the accuracy of the lane line recognition results and avoids false detections.
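One plausible reading of this deduplication step is sketched below; the text does not spell out how overlap is measured, so counting shared points relative to the smaller set's size is an assumption, as is keeping the larger of the two sets.

```python
def deduplicate_lane_sets(lane_sets, overlap_thr=0.8):
    """Drop one of any two lane line point sets whose overlap exceeds the
    target threshold, keeping longer sets first."""
    kept = []
    for points in sorted(lane_sets, key=len, reverse=True):
        pts = set(points)
        if all(len(pts & set(k)) / min(len(pts), len(k)) <= overlap_thr
               for k in kept):
            kept.append(points)
    return kept
```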
  • In a second aspect, an embodiment of the present application provides a lane line detection device. The device may include: an image acquisition unit for acquiring a lane line image to be identified; a candidate pixel point determination unit for determining, based on the lane line image, candidate pixel points for identifying the lane line area to obtain a candidate point set, where the lane line area is the area in the lane line image covering the location of the lane lines and its surroundings; a position obtaining unit for selecting a target pixel point from the candidate point set and obtaining at least three position points associated with the target pixel point in its neighborhood, the at least three position points being on the same lane line; and an expansion unit for taking the target pixel point as the starting point and expanding according to the at least three position points associated with the target pixel point to obtain the set of lane line points corresponding to the target pixel point.
  • In this way, the lane line detection device can take the target pixel point as the starting point and, guided by the at least three position points associated with the target pixel point, obtain the other instance points of the lane line by expansion, thereby obtaining the lane line point set.
  • Based on this, the lane lines in the lane line image can be quickly and accurately identified.
  • In a possible implementation, the candidate pixel point determination unit is specifically configured to: generate a confidence map of the lane line image, where the confidence map includes the confidence value of each pixel in the lane line image and the confidence value represents the credibility of each pixel belonging to the lane line area; and take the pixels whose confidence values exceed a threshold as the candidate pixel points.
  • In a possible implementation, the at least three position points include a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point, the row of the target pixel point is the same as the row of the second position point, and the pixel row M rows below is the row of the third position point, where M and N are integers greater than 0. The expansion unit is specifically configured to: adjust the starting position of the target pixel point according to the first offset (the offset from the target pixel point to the second position point) to obtain the final position of the target pixel point; taking that final position as the starting point, obtain the first lane line point by expanding on the target pixel point according to the second offset (the offset from the target pixel point to the first position point) and the second lane line point by expanding according to the third offset (the offset from the target pixel point to the third position point); and, while the confidence value corresponding to the first lane line point is greater than the first confidence value, take that point as the current target pixel point and repeat the step of obtaining lane line points by expanding according to the first, second, and third offsets, until the confidence value of the newly expanded second lane line point is not greater than the first confidence value.
  • In a possible implementation, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the row of the target pixel point is the row of the fourth position point, and the pixel row M rows below is the row of the fifth position point, where M and N are integers greater than 0. The expansion unit is specifically configured to: take the starting position of the target pixel point as the starting point, obtain the third lane line point by expanding on the target pixel point according to the fourth offset (the offset from the target pixel point to the fourth position point), and obtain the fourth lane line point by expanding according to the fifth offset (the offset from the target pixel point to the fifth position point), continuing the expansion iteratively as described for the method.
  • In a possible implementation, the at least three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N rows above the row of the target pixel point is the row of the sixth position point, the row of the target pixel point is the same as the row of the seventh position point, and the pixel row M rows below is the row of the eighth position point, where M and N are integers greater than 0. The expansion unit is specifically configured to: take the target pixel point as the starting point and adjust its starting position according to the sixth offset (the offset from the target pixel point to the seventh position point) to obtain the sixth lane line point; according to the seventh offset, obtain the candidate pixel point closest to the sixth position point to obtain the seventh lane line point, and expand on the seventh lane line point according to the sixth offset to obtain the eighth lane line point; and according to the eighth offset, obtain the candidate pixel point closest to the eighth position point to obtain the ninth lane line point, and expand on the ninth lane line point according to the sixth offset to obtain the tenth lane line point.
  • In a possible implementation, the position obtaining unit includes a pixel point selection unit configured to: select a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods of its first pixel points, a first pixel point is a pixel greater than the second confidence threshold, and a target confidence maximum is a local confidence maximum greater than the second confidence threshold (the confidence map includes the confidence value of each pixel in the lane line image, representing the credibility of each pixel belonging to the lane line area); and, in the target pixel row, select all pixels greater than the second confidence threshold as target pixel points, or select the pixel with the maximum confidence value in the neighborhood of each such pixel as the target pixel point.
  • In a possible implementation, the pixel point selection unit is alternatively configured to: select target pixel rows in the lane line image according to the confidence map, where the target pixel rows are the pixel rows in which the number of second pixel points is greater than the target value and a second pixel point is a pixel greater than the second confidence threshold (the confidence map includes the confidence value of each pixel in the lane line image, representing the credibility of each pixel belonging to the lane line area); and, in each target pixel row, select all pixels greater than the second confidence threshold as target pixel points.
  • In a possible implementation, the apparatus further includes a deduplication unit configured to obtain the degree of overlap between every two lane line point sets and, if the overlap between two lane line point sets is greater than the target threshold, delete either one of the two lane line point sets.
  • In a third aspect, an embodiment of the present application further provides an automatic driving device, including the lane line detection device according to any one of the implementations of the second aspect.
  • An embodiment of the present application further provides a lane line detection device, which may include a memory and a processor. The memory is used to store a computer program that supports the device in performing the above method, the computer program including program instructions, and the processor is configured to invoke the program instructions to execute the method described in any one of the implementations of the first aspect.
  • an embodiment of the present application provides a chip, the chip includes a processor and a data interface, the processor reads an instruction stored in a memory through the data interface, and executes the method in the first aspect.
  • Optionally, the chip may further include a memory in which instructions are stored; the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to perform part or all of the method of the first aspect.
  • The embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method described in any one of the implementations of the first aspect.
  • An embodiment of the present application further provides a computer program, where the computer program includes computer software instructions that, when executed by a computer, cause the computer to execute the lane line detection method described in any one of the implementations of the first aspect.
  • FIG. 1a is a schematic structural diagram of an automatic driving device 100 provided by an embodiment of the present application.
  • FIG. 1b is a schematic structural diagram of an automatic driving system provided by an embodiment of the present application.
  • FIG. 2a is a schematic diagram of a lane keeping assist function provided by an embodiment of the present application.
  • FIG. 2b is a schematic diagram of a lane departure warning function provided by an embodiment of the present application.
  • FIG. 2c is a schematic diagram of cruise control of an adaptive cruise control system provided by an embodiment of the present application.
  • FIG. 2d is a schematic diagram of deceleration control of an adaptive cruise control system provided by an embodiment of the present application.
  • FIG. 2e is a schematic diagram of tracking control of an adaptive cruise control system provided by an embodiment of the present application.
  • FIG. 2f is a schematic diagram of acceleration control of an adaptive cruise control system provided by an embodiment of the present application.
  • FIG. 3a is a schematic structural diagram of a system architecture 300 provided by an embodiment of the present application.
  • FIG. 3b is a schematic structural diagram of a lane line detection model provided by an embodiment of the present application.
  • FIG. 4a is a schematic flowchart of a lane line detection method provided by an embodiment of the present application.
  • FIG. 4b is a schematic diagram of candidate pixel points provided by an embodiment of the present application.
  • FIG. 4c is a schematic diagram of a δ neighborhood of a point a provided by an embodiment of the present application.
  • FIG. 4d is a schematic diagram of at least three position points associated with a target pixel point on the nearest lane line provided by an embodiment of the present application.
  • FIG. 4e is a schematic diagram of selecting a target pixel point provided by an embodiment of the present application.
  • FIG. 4f is a schematic diagram of another way of selecting a target pixel point provided by an embodiment of the present application.
  • FIG. 4g is a schematic diagram of a target pixel point and at least three position points associated with the target pixel point in a neighborhood provided by an embodiment of the present application.
  • FIG. 4h is a schematic diagram of an operation of extending lane line points provided by an embodiment of the present application.
  • FIG. 4i is a schematic diagram of another operation of extending lane line points provided by an embodiment of the present application.
  • FIG. 4j is a schematic diagram of yet another operation of extending lane line points provided by an embodiment of the present application.
  • FIG. 4k is a schematic diagram of a pixel row provided by an embodiment of the present application.
  • FIG. 5a is a schematic flowchart of another lane line detection method provided by an embodiment of the present application.
  • FIG. 5b is a schematic diagram of a lane line image provided by an embodiment of the present application.
  • FIG. 5c is a schematic diagram of a recognition result of a lane line image provided by an embodiment of the present application.
  • FIG. 5d is a schematic diagram of another recognition result of a lane line image provided by an embodiment of the present application.
  • FIG. 5e is a schematic diagram of displaying a recognition result of a lane line image on a central control screen provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a lane line detection apparatus provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a lane line detection device provided by an embodiment of the present application.
  • Any embodiment or design described in the embodiments of the present application as "exemplarily" or "such as" should not be construed as being preferred over, or more advantageous than, other embodiments or designs. Rather, the use of words such as "exemplarily" or "such as" is intended to present the related concepts in a concrete manner.
  • "A and/or B" has two meanings: A and B, or A or B.
  • "A, and/or B, and/or C" means any one of A, B, and C; alternatively, any two of A, B, and C; alternatively, A and B and C.
  • An autonomous vehicle, also known as an unmanned vehicle, a computer-driven vehicle, or a wheeled mobile robot, is an intelligent vehicle that realizes unmanned driving through a computer system.
  • Self-driving vehicles rely on the cooperation of artificial intelligence, visual computing, radar, surveillance devices, and global positioning systems to allow computer equipment to operate the motor vehicle automatically and safely without any active human operation.
  • the lane line refers to the lane marking used to guide the driving of the vehicle.
  • lane lines are generally colored.
  • the color of the lane line may be white, yellow, etc., which is not specifically limited in this embodiment of the present application.
  • The forms of the lane line may include: a solid white line, a dashed white line, a solid yellow line, a dashed yellow line, a lane stop line, and the like.
  • Among them, yellow lines are used to separate lanes of opposite travel directions.
  • The single yellow line is generally used on two-way roads with four or fewer lanes (including bicycle lanes); the double yellow line is generally used on wider roads.
  • Vehicles on the side of a dashed yellow line may temporarily cross the line, for example to turn or overtake, provided safety is ensured; vehicles on the side of a solid yellow line are prohibited from overtaking, crossing, or turning around.
  • When the double yellow line consists of two solid lines, crossing lanes to drive is prohibited.
  • white lines are used to distinguish different lanes in the same direction.
  • a road refers to a passage for vehicles to travel and for connecting two places.
  • a lane is a passageway for a single column of vehicles traveling in the same direction.
  • Common lanes include different types of lanes such as straight lanes, left-turn lanes, and right-turn lanes.
  • a road consists of one or more lanes.
  • For example, a road may consist of four lanes: one left-turn lane, two straight lanes, and one right-turn lane.
  • A single through lane is bounded by two lane lines.
  • The lane line detection method provided in this application can be applied to assisted driving (for example, lane keeping assist in advanced driver assistance, lane departure correction, and intelligent cruise assist in advanced driver assistance) and to vehicle positioning scenarios; it can also be applied to the entire automatic driving process of the vehicle to ensure the safety and smoothness of the vehicle during driving.
  • FIG. 1a is a functional block diagram of an automatic driving apparatus 100 provided by an embodiment of the present application.
  • the autonomous driving device 100 may be configured in a fully autonomous driving mode or a partially autonomous driving mode, or a manual driving mode.
  • Exemplarily, the fully automatic driving mode may be L5, meaning that all driving operations are completed by the vehicle and the human driver does not need to maintain attention. The partially automatic driving modes may be L1 to L4: L1 means the vehicle handles one of steering and acceleration/deceleration while the human driver is responsible for the remaining driving operations; L2 means the vehicle handles several of the steering and acceleration/deceleration operations while the human driver is responsible for the rest; L3 means most driving operations are completed by the vehicle and the human driver needs to stay attentive in case of emergency; L4 means the vehicle completes all driving operations and the human driver need not maintain attention, but the road and environmental conditions are restricted. The manual driving mode may be L0, meaning the car is entirely driven by the human driver.
  • The automatic driving device 100 can control itself while in the automatic driving mode: without human operation, it can determine the current state of the vehicle and its surrounding environment, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the automatic driving device 100 based on the determined information.
  • the autonomous driving device 100 may be set to operate without human interaction.
  • the autonomous driving device 100 may include various subsystems, such as a travel system 102 , a sensing system 104 , a control system 106 , one or more peripherals 108 and a power supply 110 , a computer system 112 and a user interface 116 .
  • Optionally, the autonomous driving device 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, the subsystems and elements of the autonomous driving device 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide powered motion for the autonomous driving device 100 .
  • travel system 102 may include engine 118 , energy source 119 , transmission 120 , and wheels/tires 121 .
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a gasoline engine and electric motor hybrid engine, an internal combustion engine and an air compression engine hybrid engine. In practice, the engine 118 converts the energy source 119 into mechanical energy.
  • the energy source 119 may include, but is not limited to, gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, or other power sources. Energy source 119 may also provide energy to other systems of autopilot 100 .
  • the transmission 120 may transmit the mechanical power from the engine 118 to the wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft.
  • the transmission 120 may also include other devices, such as clutches.
  • the drive shaft includes one or more shafts that may be coupled to one or more wheels 121 .
  • the sensing system 104 may include several sensors that sense environmental information about the surroundings of the automatic driving device 100 .
  • For example, the sensing system 104 may include a positioning system 122 (here, the positioning system may be a GPS system, a Beidou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130.
  • Optionally, the sensing system 104 may also include sensors that monitor the internal systems of the autonomous driving device 100, for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, and the like. Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (e.g., position, shape, orientation, velocity, etc.). Such detection and identification are critical to the safe operation of the autonomous driving device 100.
  • the global positioning system 122 may be used to estimate the geographic location of the automatic driving device 100 .
  • the geographic location of the autonomous driving device 100 may be estimated by the IMU 124 .
  • the IMU 124 is used to sense changes in the position and orientation of the automated driving device 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving device 100 .
  • radar 126 may also be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may use laser light to sense objects in the environment where the automatic driving device 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more monitors, among other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving device 100 .
  • the camera 130 may be a still camera or a video camera, which is not specifically limited in this embodiment of the present application.
  • control system 106 may control the operation of the automatic driving device 100 and the components.
  • Control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • The steering system 132 is operable to adjust the heading of the automatic driving device 100.
  • For example, it may be a steering wheel system.
  • the accelerator 134 is used to control the operating speed of the engine 118 , and thus control the speed of the automatic driving device 100 .
  • the braking unit 136 is used to control the speed of the automatic driving device 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow down the wheels 121 to control the speed of the automatic driving device 100 .
  • computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding autonomous driving device 100 .
  • objects and/or features referred to herein may include, but are not limited to, traffic signals, road boundaries, and obstacles.
  • Computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, visual tracking, and other computer vision techniques.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the driving route of the automatic driving device 100 .
  • route control system 142 may combine data from sensors, positioning system 122 , and one or more predetermined maps to determine a driving route for autonomous driving device 100 .
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise overcome potential obstacles in the environment of the automated driving device 100 .
  • An obstacle, as the name suggests, is something that blocks the way or hinders movement.
  • Optionally, the potential obstacles may include other vehicles, pedestrians, bicycles, static objects, and the like that have a potential or direct impact on the driving of the vehicle.
  • Optionally, the control system 106 may additionally or alternatively include components other than those shown and described in FIG. 1a, or some of the components shown above may be omitted.
  • the automatic driving apparatus 100 interacts with external sensors, other vehicles, other computer systems or users through the peripheral device 108 .
  • Peripherals 108 may include wireless communication system 146 , onboard computer 148 , microphone 150 and/or speaker 152 .
  • the peripherals 108 provide a means for a user of the autonomous driving apparatus 100 to interact with the user interface 116 .
  • the onboard computer 148 may provide information to the user of the autonomous driving device 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripherals 108 may provide a means for autonomous driving device 100 to communicate with other devices in the vehicle.
  • microphone 150 may receive audio, such as voice commands or other audio input, from a user of autonomous driving device 100 .
  • Speaker 152 may output audio to a user of the autonomous driving device 100.
  • the wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • For example, the wireless communication system 146 may use 3G cellular communication (e.g., CDMA, EVDO, GSM/GPRS), 4G cellular communication (e.g., LTE), or 5G cellular communication.
  • Optionally, the wireless communication system 146 may communicate with a wireless local area network (WLAN) using WiFi.
  • Optionally, the wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or use other wireless protocols such as dedicated short-range communications (DSRC).
  • the power supply 110 may provide power to various components of the automatic driving device 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the autonomous driving device 100 .
  • the power source 110 and the energy source 119 may be implemented together, eg, configured together as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable storage medium such as data storage device 114 .
  • Computer system 112 may also be a plurality of computing devices that employ distributed control of individual components or subsystems of autonomous driving apparatus 100 .
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (Central Processing Unit, CPU), and may also be other general-purpose processors, digital signal processors (Digital Signal Processors, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • Although FIG. 1a functionally shows the processor, memory, and other elements within the same physical housing, one of ordinary skill in the art will understand that the processor, computer system, or memory may in fact comprise multiple processors, computer systems, or memories that are not stored within the same physical housing.
  • For example, the memory may be a hard drive or another storage medium located in a different physical housing.
  • Thus, a reference to a processor or computer system will be understood to include a reference to a collection of processors, computer systems, or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components, may each have its own processor that only performs computations related to the function of a particular component.
  • the processor 113 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor arranged within the vehicle while others are performed by a remote processor, including taking the necessary steps to perform a single operation.
  • data storage 114 may include instructions 115 (eg, program logic) executable by processor 113 to perform various functions of autonomous driving device 100 , including those described above.
  • Data storage 114 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or to one or more of travel system 102 , sensing system 104 , control system 106 , and peripherals 108 command to control.
  • data storage 114 may store data such as road maps, route information, vehicle location, direction, speed, and other vehicle data, among other information.
  • the above-described information may be used by the autonomous driving device 100 and the computer system 112 during operation of the autonomous driving device 100 in autonomous, semi-autonomous, and/or manual modes.
  • the data storage device 114 obtains environmental information of the vehicle from the sensors 104 or other components of the autonomous driving device 100 .
  • the environmental information may be, for example, lane line information, number of lanes, road boundary information, road driving parameters, traffic signals, green belt information, and whether there are pedestrians or vehicles in the environment where the vehicle is currently located.
  • the data storage device 114 may also store status information for the vehicle itself, as well as status information for other vehicles with which the vehicle interacts.
  • the state information may include, but is not limited to, the speed, acceleration, heading angle, etc. of the vehicle.
  • the vehicle obtains the distance between other vehicles and itself, the speed of other vehicles, and the like based on the speed measurement and distance measurement functions of the radar 126 .
  • the processor 113 may acquire the above-mentioned vehicle data from the data storage device 114, and determine a driving strategy that meets the safety requirements based on the environmental information where the vehicle is located.
  • In a specific implementation, the data storage device 114 may acquire the driving video captured while the vehicle is driving from the sensing system 104 or other components of the automatic driving device 100, and preprocess the driving video to obtain the lane line image to be identified. The processor 113 can then obtain the lane line image to be identified from the data storage device 114 and, based on the lane line image, determine candidate pixel points for identifying the lane line area to obtain a candidate point set; next, select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point in its neighborhood, the at least three position points being on the same lane line; finally, taking the target pixel point as the starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
  • the above implementations can provide strong support for the autonomous driving of vehicles.
  • the user interface 116 is used to provide information to or receive information from the user of the automatic driving device 100 .
  • user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , eg, one or more of wireless communication system 146 , onboard computer 148 , microphone 150 , and speaker 152 .
  • Optionally, computer system 112 may control the functions of the autonomous driving device 100 based on input received from various subsystems (e.g., travel system 102, sensing system 104, and control system 106) and from the user interface 116.
  • For example, computer system 112 may utilize input from the control system 106 to control the steering system 132 so as to avoid obstacles detected by the sensing system 104 and the obstacle avoidance system 144.
  • computer system 112 is operable to provide control of various aspects of autonomous driving device 100 and its subsystems.
  • one or more of the above-described components may be installed or associated with the autonomous driving device 100 separately.
  • data storage device 114 may exist partially or completely separate from autonomous driving device 100 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • The above components are just one example. In practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 1a should not be construed as limiting the embodiments of the present application.
  • a self-driving vehicle traveling on a road may recognize objects in its surroundings to determine whether to adjust the speed at which the self-driving device 100 is currently traveling.
  • the objects may be other vehicles, traffic control devices, or other types of objects.
  • Optionally, each identified object may be considered independently, and the speed to which the autonomous vehicle should be adjusted may be determined based on the object's respective characteristics, e.g., its current driving data, acceleration, and distance from the vehicle.
  • Optionally, the autonomous driving apparatus 100, or a computer device associated with the autonomous driving apparatus 100, may predict the behavior of the identified objects based on their characteristics and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • Optionally, the behaviors of the recognized objects depend on one another, so the behavior of a single recognized object can also be predicted by considering all the recognized objects together.
  • the autonomous driving device 100 can adjust its speed based on the predicted behavior of the identified object.
  • Optionally, the autonomous driving device 100 can determine, based on the predicted behavior of the object, a stable state to which the vehicle needs to adjust (e.g., the adjustment may be to accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the automatic driving device 100, such as its lateral position on the road being driven, the curvature of the road, and the proximity of static and dynamic objects.
  • In addition to providing instructions to adjust the speed of the self-driving vehicle, the computer device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in its vicinity (e.g., cars in adjacent lanes on the road).
  • Optionally, the above-mentioned automatic driving device 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc., which is not particularly limited in the embodiments of the present application.
  • the automatic driving device 100 may further include a hardware structure and/or a software module, and the above-mentioned functions are implemented by driving of the hardware structure, the software module, or the hardware structure and the software module.
  • a certain function of the above functions is implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module, depending on the specific application of the technical solution and the constraints involved.
  • FIG. 1a introduces a functional block diagram of the automatic driving device 100, and the automatic driving system 101 in the automatic driving device 100 is described below.
  • FIG. 1b is a schematic structural diagram of an automatic driving system provided by an embodiment of the present application. FIG. 1a and FIG. 1b illustrate the automatic driving device 100 from different perspectives.
  • The computer system 101 in FIG. 1b is the computer system 112 in FIG. 1a.
  • computer system 101 includes processor 103 coupled to system bus 105 .
  • the processor 103 may be one or more processors, each of which may include one or more processor cores.
  • a video adapter 107 which can drive a display 109, is coupled to the system bus 105.
  • System bus 105 is coupled to input-output (I/O) bus 113 through bus bridge 111 .
  • I/O interface 115 is coupled to the I/O bus.
  • I/O interface 115 communicates with various I/O devices, such as an input device 117 (e.g., keyboard, mouse, touch screen), a media tray 121 (e.g., CD-ROM, multimedia interface), a transceiver 123 (which can transmit and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125.
  • Optionally, the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (“RISC”) processor, a complex instruction set computing (“CISC”) processor, or a combination thereof.
  • the processor may be a special purpose device such as an application specific integrated circuit (“ASIC").
  • the processor 103 may be a neural network processor or a combination of a neural network processor and the above-mentioned conventional processors.
  • computer system 101 may be located remotely from the autonomous vehicle and may communicate wirelessly with autonomous vehicle 100 .
  • some of the processes described herein are performed on a processor disposed within the autonomous vehicle, others are performed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, or the like.
  • the hard disk drive interface is coupled to the system bus 105 .
  • the hard drive interface is connected to the hard drive.
  • System memory 135 is coupled to system bus 105 . Data running in system memory 135 may include operating system 137 and application programs 143 of computer 101 .
  • the operating system includes a shell 139 and a kernel 141 .
  • Shell 139 is an interface between the user and the kernel of the operating system.
  • the shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: waiting for user input, interpreting user input to the operating system, and processing various operating system output.
  • Kernel 141 consists of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating system kernel typically runs processes, provides inter-process communication, and handles CPU time-slice management, interrupts, memory management, I/O management, and so on.
  • The application programs 143 include programs related to controlling the autonomous driving of the car, for example, programs that manage the interaction between the autonomous vehicle and road obstacles, programs that control the route or speed of the autonomous vehicle, and programs that control the interaction between the autonomous vehicle and other autonomous vehicles on the road.
  • The application programs 143 also exist on the system of the deploying server 149.
  • In one embodiment, the computer system 101 may download the application programs 143 from the deploying server 149 when they need to be executed.
  • Sensor 153 is associated with computer system 101 .
  • the sensor 153 is used to detect the environment around the computer 101 .
  • For example, the sensor 153 can detect animals, cars, obstacles, pedestrian crosswalks, and the like; further, it can detect the environment around such objects, for example, other animals around an animal, weather conditions, and the brightness of the ambient light.
  • Optionally, the sensors may be cameras, infrared sensors, chemical detectors, microphones, inertial measurement units, laser rangefinders, positioning systems, and the like. When activated, the sensor 153 senses information at preset intervals and provides the sensed information to the computer system 101 in real time.
  • For example, the positioning system in the sensor 153 obtains the driving position of the vehicle, the inertial measurement unit obtains the heading angle of the vehicle, the camera obtains the drivable area of the vehicle and the size of obstacles, and the laser rangefinder obtains the distance between the vehicle and obstacles.
  • Optionally, the vehicle may also be referred to as the ego vehicle.
  • In this embodiment of the present application, the processor 103 obtains, via the system bus 105 and the hard disk drive interface 131, the relevant data collected by the sensor 153 and the camera 155 from the hard disk drive, and calls the automatic-driving-related program 147 in the application programs 143 to execute the following method: first, acquiring the lane line image to be identified; secondly, determining, based on the lane line image, the candidate pixel points for identifying the lane line area to obtain a candidate point set (specifically, the lane line area is the area in the lane line image where the lane lines are located together with its surroundings); then, selecting a target pixel point from the candidate point set and obtaining at least three position points associated with the target pixel point in its neighborhood, the three position points being on the same lane line. Here, the location of the target pixel point indicates the existence of a lane line, and the three position points characterize the local structure of the lane line in the neighborhood of the target pixel point. Thus, taking the target pixel point as the starting point, the method expands according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
  • the confidence map includes the confidence value of each pixel in the lane line image, and the confidence value is used to represent the reliability of each pixel in the lane line image belonging to the lane line area.
  • In a specific implementation, for each pixel point, the probability of its belonging to the lane line area can be calculated;
  • the neighborhood map includes at least three position points associated with the target pixel in the neighborhood.
  • the position of the target pixel represents the existence of the lane line, and the three position points are used to represent the local structure of the lane line in the neighborhood of the target pixel.
  • Specifically, the target pixel row can be selected in the lane line image according to the confidence map.
  • In one example, the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods where the first pixel points are located; the first pixel point is a pixel point whose confidence value is greater than the second confidence threshold; the target confidence maximum is a confidence maximum greater than the second confidence threshold.
  • In another example, the target pixel row is a pixel row in which the number of second pixel points is greater than the target value; the second pixel point is a pixel point whose confidence value is greater than the second confidence threshold.
  • The target pixel point can then be selected from the determined target pixel row.
  • The number of target pixel points can take various forms; that is, the method does not preset the number of target pixel points and can detect any number of lane lines, avoiding missed detection of lane lines. Further, the method can be applied to special situations such as lane line crossing and merging.
  • the computer system 101 may be located remotely from the autonomous driving device 100 and may be in wireless communication with the autonomous driving device 100 .
  • the transceiver 123 can send the automatic driving task, sensor data collected by the sensor 153 and other data to the computer system 101 ; and can also receive control instructions sent by the computer system 101 .
  • the automatic driving device can execute the control instructions from the computer system 101 received by the transceiver 123 and perform corresponding driving operations.
  • Some of the processes described herein are arranged to be performed on a processor within the autonomous vehicle, while others are performed by a remote processor, including taking the actions required to perform a single operation.
  • The first application scenario: Lane Keeping Assist System.
  • In the lane keeping assist system, when the car is driving on a road and deviates from the lane centerline (traveling direction b shown in Figure 2a), the system first sends a warning signal to the driver. If the driver does not respond within a period of time and the car shows no tendency to return to the center line of the lane, the lane keeping assist system sends a corresponding steering command to the steering actuator of the vehicle through the electronic control unit (ECU, Electronic Control Unit).
  • The steering command is used to correct the driving state of the vehicle and make the vehicle return to the center line of the lane (traveling direction a shown in Figure 2a), so as to ensure driving safety.
  • The second application scenario: Lane Departure Warning System.
  • In the lane departure warning system, when the car is driving on a road, the system obtains vehicle dynamic parameters such as vehicle speed and steering state on the basis of lane line recognition, and judges whether the vehicle deviates from the lane.
  • When the lane departure warning system determines that the vehicle deviates from the lane (traveling direction a shown in Figure 2b), it reminds the driver of the current driving state through methods such as alarming and intervention, and prompts the vehicle to return to the correct lane (traveling direction b shown in Figure 2b), in order to reduce the occurrence of traffic accidents.
  • If the predicted driving direction coincides with the expected driving trajectory, or the angle θ between the two lines is less than a set value (for example, 1°), the vehicle is considered not to have deviated from the lane.
  • If the two trajectories do not overlap and the angle θ between the two lines is not less than the set value, the car is considered to have deviated from the lane. It can be understood that, to realize this function, it is necessary to judge whether the vehicle deviates from the lane, and that judgment requires identifying the lane line in the image captured by the vehicle. Therefore, identifying lane lines in images is crucial.
  • The third application scenario: adaptive cruise control system.
  • In the adaptive cruise control system, the radar installed at the front of the vehicle can continuously scan the road conditions ahead and measure the distance between the vehicle and the vehicles or obstacles in front; it can also detect the lane line in real time to assist the driver in keeping the vehicle on the center line of the lane. As shown in Figure 2c, if there are no other vehicles or obstacles in front of the vehicle, the vehicle cruises at a set speed.
  • When a slower vehicle is detected ahead, the adaptive cruise control system controller calculates the vehicle data of the own vehicle (for example, speed and acceleration) and then sends a deceleration control signal to reduce the speed of the vehicle.
  • The deceleration control signal reduces the speed of the vehicle so that the distance between the two vehicles remains a safe distance, and tracking control is used after decelerating to the desired value.
  • When it is detected that the vehicle ahead is accelerating, the adaptive cruise control system sends an acceleration signal, and the vehicle accelerates to the desired value and uses tracking control. As shown in Figure 2f, when the own vehicle or the vehicle in front moves out of the original lane so that the vehicle in front can no longer be followed, the adaptive cruise control system can send an acceleration control signal to the vehicle to keep it cruising at a constant set speed.
  • The fourth application scenario: vehicle positioning system.
  • Vehicle positioning often uses fusion methods (for example, fusion of absolute positioning and relative positioning): first, GPS (Global Positioning System) and inertial navigation sensors can be used to determine the approximate position of the vehicle; then, features of the high-precision map, the lidar point cloud, and the camera image are matched to determine the precise location of the vehicle.
  • Lane lines are often modeled in high-precision maps as a stable road structure; the vehicle's camera detects the lane lines during actual driving, these detections are matched against the lane lines in the high-precision map, and positioning can then be completed.
  • the embodiment of the present application provides a lane line detection model training method, and the training method is applied to the training of a specific task/prediction model (hereinafter referred to as a task model).
  • The training method can be used to train various task models constructed based on deep neural networks, including but not limited to classification models, recognition models, segmentation models, and detection models.
  • The task model (eg, the trained lane line detection model) obtained by the training method described in this application can be widely applied in various specific application scenarios (eg, the lane keeping assist system, the lane departure warning system, and the adaptive cruise control system) to realize intelligent applications.
  • FIG. 3a is a schematic structural diagram of a system architecture 300 provided by an embodiment of the present application.
  • the data collection device 340 is used to collect or generate training data.
  • the training data may be: multiple labeled images, etc.
  • Each labeled image includes, for each pixel, a label indicating whether the pixel belongs to the lane line area (for example, a label of 1 means that the pixel belongs to the lane line area, and a label of 0 means that the pixel does not belong to the lane line area), and, for each pixel belonging to the lane line area, at least three position points associated with it in the neighborhood, the three position points being on the same lane line.
  • the training data is stored in the database 330, and the training device 320 generates the target model/rule 301 based on the training data maintained in the database 330,
  • The training device 320 trains the lane line detection model with the above-mentioned labeled training data until the training of the lane line detection model reaches a convergence state, and a trained lane line detection model is obtained.
  • the above-mentioned lane line detection model may be constructed based on a convolutional neural network, or other classification, regression models (such as support vector machine SVM) and the like may be used.
  • In one example, the convolutional neural network (CNN) adopts the encoder-decoder (Encoder-Decoder) architecture to realize lane line detection.
  • the specific structure of the lane line detection model may be as shown in FIG. 3 b , including an Encoder module 31 , a multi-scale context module 32 and a Decoder module 33 .
  • The Encoder module 31 may include an input layer 310, a convolution layer 311 and a pooling layer 312, wherein the input layer 310 is used for receiving input data, for example an input image; the convolution layer 311 is used to extract features of the input data, for example, when the input data is an image, the convolution layer 311 extracts the features of the input image to reduce the number of parameters introduced by the input image; and the pooling layer 312 is used to downsample the data to reduce the amount of data. For example, during image processing, the pooling layer 312 can reduce the spatial size of the image.
  • the pooling layer 312 may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain a smaller size image.
  • the average pooling operator can calculate the pixel values in the image within a certain range to produce an average value as the result of average pooling.
  • the max pooling operator can take the pixel with the largest value within a specific range as the result of max pooling.
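To make the two pooling operators above concrete, the following is a small illustrative sketch (not taken from the patent text; the 2x2 window size is an assumption) showing max and average pooling over a single-channel image:

```python
# Illustrative sketch: 2x2 average and max pooling over a small
# single-channel image, matching the operators described above.
import numpy as np

def pool2x2(img: np.ndarray, mode: str = "max") -> np.ndarray:
    """Downsample an HxW array by a factor of 2 with max or average pooling."""
    h, w = img.shape
    blocks = img[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))   # largest value in each 2x2 window
    return blocks.mean(axis=(1, 3))      # average value in each 2x2 window

img = np.arange(16, dtype=float).reshape(4, 4)
print(pool2x2(img, "max"))  # each output pixel is the max of a 2x2 window
print(pool2x2(img, "avg"))  # each output pixel is the mean of a 2x2 window
```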
  • the Decoder module 33 may include a convolution layer 331, a deconvolution layer 332, a convolution layer 333, a deconvolution layer 334, and a convolution layer 335, and the Decoder module 33 can obtain a confidence map and a neighborhood map, wherein,
  • the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value is used to represent the credibility of each pixel in the lane line image belonging to the lane line area;
  • The neighborhood map includes at least three location points associated with the target pixel in the neighborhood, where the location of the target pixel represents the existence of a lane line, and the three location points are used to characterize the local structure of the lane line in the neighborhood of the target pixel.
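As a rough illustration of the Encoder-Decoder layout just described, the following PyTorch sketch is a minimal stand-in rather than the patent's exact network: the channel counts, layer counts, and the three-channel neighborhood head (x-offsets to the associated points in the same row, N rows up, and M rows down) are all assumptions, and the multi-scale context module 32 is omitted for brevity.

```python
# Minimal encoder-decoder sketch with a confidence head and a neighborhood
# head, under the assumptions stated above.
import torch
import torch.nn as nn

class LaneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # pooling: halve spatial size
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),  # deconvolution
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
        )
        self.conf_head = nn.Conv2d(16, 1, 1)  # per-pixel confidence map
        self.nbr_head = nn.Conv2d(16, 3, 1)   # neighborhood map (3 offsets)

    def forward(self, x):
        feat = self.decoder(self.encoder(x))
        return torch.sigmoid(self.conf_head(feat)), self.nbr_head(feat)

model = LaneNet()
conf, nbr = model(torch.randn(1, 3, 256, 512))
print(conf.shape, nbr.shape)  # (1, 1, 256, 512), (1, 3, 256, 512)
```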
  • the implementation process of training the above-mentioned lane line detection model by the training device 320 may include the following steps:
  • Step S11: obtain training samples; the training samples include labeled images, and each labeled image includes, for each pixel in the image, a label indicating whether the pixel belongs to the lane line area, and, for each pixel belonging to the lane line area, at least three position points associated with it in the neighborhood, the three position points being on the same lane line.
  • Specifically, the label of a pixel belonging to the lane line area may be 1, indicating that the pixel belongs to the lane line area, and the label of a pixel not belonging to the lane line area may be 0, indicating that the pixel does not belong to the lane line area.
  • Step S12: train the lane line detection model using the above training samples until the lane line detection model reaches a convergence state, obtaining a trained lane line detection model.
  • the above-mentioned convergence state may include a state reached by the lane line detection model after the number of times of training the lane line detection model by the training device 320 reaches a preset number of epochs (Epoch).
  • When the Epoch number is 1, the training device 320 uses all the data in the training data set to train the lane line detection model once.
  • When the number of times the lane line detection model has been trained using all the data in the training data set reaches the set Epoch number, the training of the lane line detection model is completed.
  • At this point, the lane line detection model is in a state of convergence.
  • the lane line detection model can be specifically a convolutional neural network
  • In a specific implementation, the error back propagation algorithm can be used in the convolutional neural network to correct the parameters of the initial model during the training process, so that the reconstruction error loss of the initial model becomes smaller and smaller.
  • The above-mentioned convergence state may also include the state reached when the training device 320 trains the lane line detection model such that the output value of the loss function continuously decreases until the loss function approaches the objective function.
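A hedged sketch of the training loop described in steps S11-S12 and the two convergence criteria above (a preset Epoch count, or a loss that stops decreasing) might look as follows; `LaneNet`, the data loader, and the loss weighting are illustrative assumptions rather than the patent's specification:

```python
# Epoch-based training sketch: BCE loss on the confidence map plus a
# smooth-L1 loss on the neighborhood offsets at lane pixels (assumptions).
import torch
import torch.nn as nn

def train(model, loader, epochs=30, tol=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce, l1 = nn.BCELoss(), nn.SmoothL1Loss()
    prev_total = float("inf")
    for epoch in range(epochs):            # one Epoch = one full pass over the data
        total = 0.0
        for img, conf_gt, nbr_gt in loader:
            conf, nbr = model(img)
            loss = bce(conf, conf_gt)      # confidence-map supervision
            mask = (conf_gt > 0.5).expand_as(nbr)
            if mask.any():                 # offset supervision only on lane pixels
                loss = loss + l1(nbr[mask], nbr_gt[mask])
            opt.zero_grad()
            loss.backward()                # error back propagation
            opt.step()
            total += loss.item()
        if abs(prev_total - total) < tol:  # loss stops decreasing: convergence
            break
        prev_total = total
    return model
```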
  • the training data maintained in the database 330 may not necessarily come from the collection of the data collection device 340, and may also be received from other devices.
  • The training device 320 may not necessarily train the target model/rule 301 entirely based on the training data maintained by the database 330, and may also obtain training data from the cloud or generate training data itself for model training. The above description should not be construed as a limitation on the embodiments of the present application.
  • the target model/rule 301 trained according to the training device 320 can be applied to different systems or devices, such as the execution device 310 shown in FIG. 3a.
  • the execution device may be an in-vehicle terminal on a vehicle.
  • the execution device 310 may execute the data processing method in this embodiment of the present application, for example, the data processing method may include an image processing method and the like.
  • the execution device 310 is configured with an I/O interface 312 for data interaction with external devices.
  • The user can input data to the I/O interface 312 through the client device 340; the input data described in this embodiment of the present application may include images and videos to be recognized.
  • the implementation process for the execution device 310 to run the trained lane line detection model may include the following steps:
  • Step S21 acquiring a lane line image to be identified
  • Step S22 processing the above-mentioned lane line image to be identified by the lane line detection model, and identifying the lane line in the lane line image.
  • The execution device 310 may call the data, codes, etc. in the data storage system 370 for corresponding processing, and may also store the data, instructions, etc. obtained by the corresponding processing in the data storage system 370.
  • FIG. 3a is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship among the devices, components, modules, etc. shown in the figure does not constitute any limitation.
  • the data storage system 350 is an external memory relative to the execution device 310 , in other cases, the data storage system 350 can also be placed in the execution device 310 .
  • FIG. 4a is a lane line detection method provided by an embodiment of the present application, and the method may include but is not limited to the following steps:
  • Step S401 acquiring a lane line image to be identified.
  • the image of the lane line to be recognized can be obtained through a camera.
  • the camera may be arranged at the front end of the vehicle, and is used to capture images of lane lines during the driving process of the vehicle.
  • a lane line image refers to an image containing lane lines.
  • the lane line image may be a single image, or may be a video frame extracted from a video, or the like.
  • the image of the lane line can be sent to the vehicle terminal, so that the vehicle terminal can process the image of the lane line to obtain the lane line area.
  • the above-mentioned lane line image may be sent to the server, so that the server processes the lane line image to obtain the lane line area.
  • After the server recognizes the lane line area, the lane line area is sent to the vehicle terminal, so that the vehicle terminal can realize the automatic driving function of the vehicle in combination with the automatic driving system on the vehicle.
  • Step S402: determine candidate pixel points for identifying the lane line area based on the lane line image, and obtain a candidate point set; the lane line area is the area where the lane line is located in the lane line image and the surrounding area of the location of the lane line.
  • the candidate pixel points refer to the pixel points that are most likely to fall into the lane line area.
  • the process of determining candidate pixel points for identifying the lane line area based on the lane line image, and obtaining the candidate point set may include: first, generating a confidence map of the lane line image; wherein, the confidence map Contains the confidence value of each pixel in the lane line image; the confidence value is used to represent the confidence level of each pixel in the lane line image belonging to the lane line area.
  • In a specific implementation, for each pixel point, the probability value of its belonging to the lane line area can be calculated by formula (1).
  • the range of the probability value calculated by the above formula (1) is between 0 and 1. It can be understood that, the larger the probability value corresponding to the pixel point, the higher the credibility of the pixel point belonging to the lane line area.
  • For the pixel point at the center of the lane line, the probability value is 1; for the other pixels in the neighborhood of that pixel point, the greater the distance from the center of the lane line, the smaller the probability value.
  • the dotted box in the figure shows the lane line area.
  • the lane line area may include the area where the lane line is located, and may also include the surrounding area where the lane line is located.
  • the surrounding area may be an area within a preset range from the center point of the lane line.
  • the candidate pixels are the pixels that belong to the lane line area (indicated by the black circles in the figure), and the non-candidate pixels are the pixels that do not belong to the lane line area (indicated by the white circles in the figure).
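The confidence behaviour described above (a probability of 1 at the lane line center, decreasing with distance from the center) can be illustrated with the following hedged sketch. Since formula (1) is not reproduced in this text, the Gaussian form and the sigma value here are assumptions, not the patent's formula:

```python
# Illustrative per-pixel confidence for one pixel row, peaking at each
# lane-line centre and decaying with distance (Gaussian form assumed).
import numpy as np

def confidence_row(width: int, centers: list, sigma: float = 3.0) -> np.ndarray:
    """Confidence values for one pixel row given lane-centre columns."""
    x = np.arange(width, dtype=float)
    conf = np.zeros(width)
    for c in centers:
        conf = np.maximum(conf, np.exp(-((x - c) ** 2) / (2 * sigma ** 2)))
    return conf  # 1.0 at each centre, approaching 0 far from any lane line

row = confidence_row(40, centers=[10, 30])
print(row.round(2)[5:15])  # values peak at column 10
```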
  • Step S403 select the target pixel point in the candidate point set, and obtain at least three position points associated with the target pixel point in the neighborhood; the at least three position points are on the same lane line.
  • At least three position points associated with the target pixel in the neighborhood are the position points on the lane line (lane line 2 in FIG. 4d ) closest to the target pixel point.
  • the location of the target pixel represents the existence of a lane line
  • the three position points are used to represent the local structure of the lane line in the neighborhood of the target pixel.
  • In a specific implementation, the process of selecting the target pixel point in the candidate point set may include: first, according to the confidence map, selecting the target pixel row in the lane line image; the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods where the first pixel points are located; the first pixel point is a pixel point whose confidence value is greater than the second confidence threshold; the target confidence maximum is a confidence maximum greater than the second confidence threshold (for example, the second confidence threshold is 0.5); the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value is used to represent the credibility that each pixel in the lane line image belongs to the lane line area. Then, in the target pixel row, select all pixel points greater than the second confidence threshold as target pixel points, or select, among the pixel points greater than the second confidence threshold, those whose confidence is a maximum within their neighborhood as target pixel points.
  • Since the acquired target pixel points are more likely to represent the center position of the lane line, this provides an effective guarantee for subsequently acquiring the lane line point sets corresponding to the target pixel points.
  • As shown in Figure 4e, the image contains three lane lines, namely lane line 1, lane line 2 and lane line 3, where the black points are target confidence maximum points; the pixel row with the most target confidence maximum points (eg, pixel row 3 in Figure 4e) is selected as the target pixel row.
  • the above target pixels can be obtained through the maximum pooling operation in the convolutional neural network.
  • In another specific implementation, the process of selecting the target pixel point in the candidate point set may include: first, according to the confidence map, selecting target pixel rows in the lane line image; a target pixel row is a pixel row in which the number of second pixel points is greater than the target value (for example, the target value is 1/2 of the number of lane lines); the second pixel point is a pixel point whose confidence value is greater than the second confidence threshold (for example, the second confidence threshold is 0.5); the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value is used to represent the credibility that each pixel in the lane line image belongs to the lane line area. Then, in the target pixel rows, select all pixel points greater than the second confidence threshold as target pixel points.
  • In the example of Fig. 4f, the image contains three lane lines, namely lane line 1, lane line 2 and lane line 3, where the black points are the pixel points whose confidence values are greater than the second confidence threshold (for example, in Fig. 4f, the confidence value corresponding to pixel 1 is 1 and the confidence value corresponding to pixel 2 is 0.7), and all pixel points greater than the second confidence threshold are selected as target pixel points.
  • the method for determining target pixel points described in this application can ensure that the determined multiple target pixel points cover all lane lines included in the lane line image as much as possible to avoid missed detection.
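Both target-pixel-row selection strategies described above can be sketched as follows, operating on an (H, W) confidence map; the 3-pixel neighbourhood used for the maximum test and the default thresholds are assumptions consistent with, but not dictated by, the text:

```python
# Hedged sketch of the two row-selection strategies over a confidence map.
import numpy as np

def local_maxima(row: np.ndarray, thr: float) -> np.ndarray:
    """Columns exceeding thr that are maxima of their 3-pixel neighbourhood."""
    pad = np.pad(row, 1, constant_values=-np.inf)
    is_max = (row >= pad[:-2]) & (row >= pad[2:]) & (row > thr)
    return np.flatnonzero(is_max)

def target_row_by_maxima(conf: np.ndarray, thr: float = 0.5) -> int:
    """Strategy 1: the row containing the most confidence maxima above thr."""
    counts = [local_maxima(conf[y], thr).size for y in range(conf.shape[0])]
    return int(np.argmax(counts))

def target_rows_by_count(conf: np.ndarray, thr: float = 0.5,
                         target: int = 1) -> list:
    """Strategy 2: every row whose count of above-thr pixels exceeds target."""
    return [y for y in range(conf.shape[0]) if int((conf[y] > thr).sum()) > target]
```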
  • Step S406 taking the target pixel point as a starting point, and expanding according to at least three position points associated with the target pixel point, to obtain a set of lane line points corresponding to the target pixel point.
  • one lane line point set represents a complete lane line. It can be understood that the number of lane line point sets is the same as the number of lane lines.
  • First, taking the target pixel point as the starting point, the starting position of the target pixel point is adjusted according to the first offset to obtain the final position of the target pixel point; the first offset is the offset from the target pixel point to the second position point.
  • For example, when the target pixel point is a pixel point obtained by upward expansion, the final position of the target pixel point can be expressed as (x(P_k) + Δx_same, y(P_k)), where x(P_k) represents the abscissa of the pixel point P_k obtained by expansion, y(P_k) represents the ordinate of the pixel point P_k obtained by expansion, and Δx_same represents the first offset.
  • Likewise, when the target pixel point is a pixel point obtained by downward expansion, the final position of the target pixel point can be expressed as (x(P_k) + Δx_same, y(P_k)).
  • the precise position of the lane line can be gradually adjusted, and other lane line points can be extended from the pixel point located at the center of the lane line, which can improve the accuracy of detecting the lane line.
  • Then, with the final position of the target pixel point as the starting point, the first lane line point obtained by extending the target pixel point according to the second offset is acquired; the second offset is the offset from the target pixel point to the first position point.
  • The first lane line point can be represented as (x(P_k) + Δx_up, y(P_k) − N), where P_k is the pixel point, x(P_k) represents the abscissa of the pixel point P_k, y(P_k) represents the ordinate of the pixel point P_k, Δx_up represents the second offset, and N represents the number of pixel rows between the pixel row where the target pixel point is located and the pixel row where the first position point is located.
  • In a specific implementation, the second lane line point obtained by extending the target pixel point according to a third offset is also acquired; the third offset is the offset from the target pixel point to the third position point.
  • The second lane line point can be represented as (x(P_k) + Δx_down, y(P_k) + M), where Δx_down represents the third offset and M represents the number of pixel rows between the pixel row where the target pixel point is located and the pixel row where the third position point is located.
  • When the confidence value corresponding to the first lane line point is greater than the first confidence value, the first lane line point is taken as the current target pixel point and the step of acquiring the point extended according to the first offset and the second offset is repeated; likewise, when the confidence value corresponding to the second lane line point is greater than the first confidence value, the second lane line point is taken as the current target pixel point and the step of acquiring the point extended according to the first offset and the third offset is repeated.
  • In the case where there are multiple target pixel points, the above-mentioned method can be used for expansion, and the lane line point set corresponding to each target pixel point can be obtained.
  • the extended lane line points are all pixels belonging to the lane line area, and expansion based on invalid lane line points can be avoided, so that the lane line can be quickly and accurately identified.
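The expansion procedure described above can be sketched as follows. Here `conf` is an (H, W) confidence map and `nbr` a (3, H, W) neighbourhood map holding, per pixel, the x-offset to the associated point in the same row, N rows up, and M rows down; the channel layout and all names are assumptions rather than the patent's exact representation:

```python
# Hedged sketch of row-by-row expansion from one target pixel point.
import numpy as np

def expand_from(conf: np.ndarray, nbr: np.ndarray, x0: int, y0: int,
                N: int = 1, M: int = 1, thr: float = 0.5) -> list:
    """Collect lane points starting at (x0, y0), growing upward then downward.
    nbr[0] = dx to the same-row point, nbr[1] = dx to the point N rows up,
    nbr[2] = dx to the point M rows down (channel layout assumed)."""
    h, w = conf.shape
    pts = []

    def walk(x: float, y: int, dy: int, chan: int) -> None:
        while 0 <= y < h and 0 <= int(round(x)) < w \
                and conf[y, int(round(x))] > thr:  # stop at invalid points
            x = x + nbr[0, y, int(round(x))]       # first offset: re-centre in row
            xi = int(round(x))
            if not 0 <= xi < w:
                return                             # left the image: stop expanding
            pts.append((xi, y))
            x = x + nbr[chan, y, xi]               # offset to the next row's point
            y = y + dy

    walk(x0, y0, -N, 1)   # upward expansion (second offset)
    walk(x0, y0, +M, 2)   # downward expansion (third offset)
    return sorted(set(pts))
```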
  • In another possible implementation, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the pixel row where the target pixel point is located is the pixel row where the fourth position point is located; the pixel row M rows below the pixel row where the target pixel point is located is the pixel row where the fifth position point is located; M and N are integers greater than 0. Taking the target pixel point as the starting point and expanding according to the at least three position points associated with the target pixel point includes: taking the starting position of the target pixel point as the starting point, acquiring the third lane line point obtained by extending the target pixel point according to the fourth offset, wherein the fourth offset is the offset from the target pixel point to the fourth position point.
  • The third lane line point can be represented as (x(P_k) + Δx_up, y(P_k) − N), where P_k is the pixel point, x(P_k) represents the abscissa of the pixel point P_k, y(P_k) represents the ordinate of the pixel point P_k, Δx_up represents the fourth offset, and N represents the number of pixel rows between the pixel row where the target pixel point is located and the pixel row where the fourth position point is located.
  • the fourth lane line point extended on the target pixel point according to the fifth offset is also acquired, where the fifth offset is the offset from the target pixel to the fifth position point.
  • The fourth lane line point can be represented as (x(P_k) + Δx_down, y(P_k) + M), where Δx_down represents the fifth offset and M represents the number of pixel rows between the pixel row where the target pixel point is located and the pixel row where the fifth position point is located.
  • When the confidence value corresponding to the third lane line point is greater than the first confidence value, the third lane line point is taken as the current target pixel point and the expansion is continued, until the confidence value corresponding to the extended third lane line point is not greater than the first confidence value.
  • Likewise, when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, the fourth lane line point is taken as the current target pixel point and the expansion is continued, until the confidence value corresponding to the extended fourth lane line point is not greater than the first confidence value.
  • Taking the second lane line point obtained by expansion as an example: "the confidence value corresponding to the second lane line point is greater than the first confidence value" means that the second lane line point obtained by extension is a valid extension instance point. In this case, the valid extension instance point can be used as a starting point, and the extension continues in order to find more valid extension instance points.
  • Conversely, "the confidence value corresponding to the second lane line point is not greater than the first confidence value" (for example, the confidence value corresponding to the second lane line point is smaller than, or equal to, the first confidence value) means that the second lane line point obtained by extension is an invalid extension instance point.
  • In that case, the expansion is terminated. It should be noted that when an extended instance point exceeds the range of the lane line image (that is, the extended instance point does not belong to the lane line image), the extended instance point is considered an invalid extension instance point, and the expansion is terminated at that point.
  • In another possible implementation, the implementation process of expanding according to at least three position points associated with the target pixel point may include: taking the target pixel point as the starting point, adjusting the starting position of the target pixel point according to the sixth offset to obtain the sixth lane line point, the sixth offset being the offset from the target pixel point to the seventh position point; obtaining a candidate pixel point closest to the sixth position point according to the seventh offset to obtain the seventh lane line point, and obtaining the eighth lane line point by extending the seventh lane line point according to the sixth offset; and obtaining a candidate pixel point closest to the eighth position point according to the eighth offset to obtain the ninth lane line point, and obtaining the tenth lane line point by extending the ninth lane line point according to the sixth offset.
  • For example, for a target pixel point in the second pixel row: taking the target pixel point as the starting point, the starting position of the target pixel point is adjusted within the same pixel row according to the sixth offset to obtain the sixth lane line point (as shown in Figure 4i). Then, in the upper N rows, a candidate pixel point closest to the sixth position point is obtained according to the seventh offset (for example, the seventh lane line point as shown in Fig. 4i), and on this candidate pixel point, the eighth lane line point is obtained by extending according to the sixth offset.
  • Similarly, in the lower M rows, a candidate pixel point closest to the eighth position point is obtained according to the eighth offset (for example, the ninth lane line point as shown in Fig. 4i), and it is extended according to the sixth offset to obtain the tenth lane line point; in this way, the lane line point set corresponding to the target pixel point can be obtained. It can be understood that, in the case that there are multiple target pixel points, the above method can be used to obtain the lane line point set corresponding to each target pixel point in parallel.
  • In yet another possible implementation, the implementation process of expanding according to the at least three position points associated with the target pixel point may include: taking the target pixel point as the starting point, obtaining a candidate pixel point closest to the ninth position point according to the ninth offset to obtain the eleventh lane line point; and obtaining a candidate pixel point closest to the tenth position point according to the tenth offset to obtain the twelfth lane line point.
  • For example, a candidate pixel point closest to the ninth position point is obtained according to the ninth offset (for example, the eleventh lane line point as shown in Figure 4j); by repeating this expansion, the lane line point set corresponding to the target pixel point can be obtained. It can be understood that, in the case that there are multiple target pixel points, the above method can be used to obtain the lane line point set corresponding to each target pixel point in parallel.
  • M and N may take the same numerical value, or may take different numerical values.
  • Here, a dotted line represents a pixel row, and adjacent pixel rows may be separated by 10 pixels, 3 pixels, or a certain number of pixels, where the certain number is the set value or a value within an upper and lower fluctuation range of the set value (for example, 50%).
  • The method described above in this application can not only quickly and accurately detect the lane lines closer to the current position of the vehicle, but also quickly and accurately detect the lane lines farther from the current position; that is, it can well reflect the extension of the lane line, providing a basis for automatic driving and thereby ensuring safety during automatic driving.
  • The lane line detection device can take the target pixel point as the starting point, obtain the other instance points of the lane line extended from the target pixel point according to the at least three position points associated with it, and then obtain the lane line point set.
  • the lane lines in the lane line image can be quickly and accurately identified.
  • The lane line point sets obtained by the above method (composed of the lane line point set corresponding to each target pixel point) may contain duplicate lane line points, as shown in Figure 5a.
  • the method embodiment illustrates how to remove duplicate lane line points, which may include but not limited to the following steps:
  • Step S408: obtain the degree of overlap between every two lane line point sets.
  • the degree of overlap between two lane line point sets is calculated.
  • In a specific implementation, the score of each lane line point set can be determined according to the confidence values corresponding to all the pixel points in that set; then the lane line point sets are sorted according to their scores, for example in descending order or in ascending order; after that, for the point sets whose scores are greater than a set threshold (for example, the set threshold may be a confidence threshold), the degree of overlap between every two lane line point sets is calculated.
  • The overlap degree IOU of two lane line point sets is defined as follows:
  • IOU(Inst1, Inst2) = |Inst1 ∩ Inst2| / (|Inst1| + |Inst2| − |Inst1 ∩ Inst2|)
  • where Inst1 represents lane line point set 1 and Inst2 represents lane line point set 2.
  • If two lane line points are located in the same pixel row, that is, their ordinates (y) are the same, and the difference between their abscissas (x) is less than a set threshold, then the two lane lines can be considered to coincide in this pixel row.
  • In this way, the number of overlapping lane line points between two lane line point sets can be obtained, and the degree of overlap between every two lane line point sets is then determined according to the number of overlapping lane line points.
  • Step S4010: determine whether the degree of overlap between two lane line point sets is greater than the target threshold, and if so, perform step S4012.
  • the value of the target threshold is not specifically limited.
  • the target threshold may be set autonomously by the lane line detection device, or may be set by the lane line detection device according to the user's needs.
  • the target threshold is 0.85.
  • Step S4012: delete either one of the two overlapping lane line point sets.
  • For example, the lane line image is shown in Figure 5b and includes three lane lines, namely lane line 1, lane line 2, and lane line 3. Since the determined target pixel points come from multiple pixel rows, the lane line point sets corresponding to the target pixel points can be as shown in Figure 5c. In Figure 5c, it can be seen that there are two recognition results for lane line 2, namely lane line 21 and lane line 22; in this case, either of the two lane line point sets is deleted. The recognition result of the lane line image can then be as shown in Figure 5d.
  • The lane line detection device deletes either of two lane line point sets when it determines that the overlap between them is greater than the target threshold, which ensures the accuracy of the lane line recognition results and avoids false detection.
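The overlap computation and de-duplication just described (steps S408 to S4012) can be sketched as follows, representing each lane line point set as a mapping from pixel row to column. The per-row x tolerance is an assumption, the 0.85 threshold mirrors the example above, and scoring by confidence is assumed rather than specified:

```python
# Hedged sketch of pairwise overlap (IOU) between lane line point sets and
# greedy removal of duplicates; dx_tol is an assumed per-row tolerance.
def iou(inst1: dict, inst2: dict, dx_tol: float = 3.0) -> float:
    """inst maps row y -> column x. Two sets 'coincide' in a row when both
    contain that row and their x difference is below dx_tol."""
    shared = sum(1 for y in inst1.keys() & inst2.keys()
                 if abs(inst1[y] - inst2[y]) < dx_tol)
    return shared / (len(inst1) + len(inst2) - shared)

def deduplicate(instances: list, scores: list, iou_thr: float = 0.85) -> list:
    """Keep higher-scoring point sets; drop either member of an overlapping pair."""
    order = sorted(range(len(instances)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(instances[i], instances[j]) <= iou_thr for j in kept):
            kept.append(i)
    return [instances[i] for i in kept]

# Example: two near-identical detections of one lane line collapse to one.
a = {y: 100 + y for y in range(10)}
b = {y: 101 + y for y in range(10)}
print(len(deduplicate([a, b], scores=[0.9, 0.8])))  # -> 1
```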
  • In one example, the recognition result of the lane line image can be displayed on the central control screen 501 of the vehicle.
  • the automatic driving device or the driver can drive according to the above recognition result.
  • FIG. 6 is a schematic structural diagram of a lane line detection device 60 according to an embodiment of the present application.
  • the lane line detection device 60 shown in FIG. 6 may include:
  • a candidate pixel point determining unit 602, configured to determine candidate pixel points for identifying the lane line area based on the lane line image, to obtain a candidate point set;
  • the lane line area is the area where the lane line is located in the lane line image and the surrounding area of the location of the lane line;
  • the expansion unit 606 is configured to take the target pixel point as a starting point, perform expansion according to at least three position points associated with the target pixel point, and obtain a set of lane line points corresponding to the target pixel point.
  • In a possible implementation, the candidate pixel point determining unit 602 is specifically configured to:
  • generate a confidence map of the lane line image; the confidence map includes the confidence value of each pixel in the lane line image, and the confidence value is used to characterize the reliability that each pixel in the lane line image belongs to the lane line area;
  • and determine, in the confidence map, the candidate pixel points for identifying the lane line area.
  • In a possible implementation, the at least three position points include a first position point, a second position point and a third position point; the pixel row N rows above the pixel row where the target pixel point is located is the pixel row where the first position point is located; the pixel row where the target pixel point is located is the same as the pixel row where the second position point is located; the pixel row M rows below the pixel row where the target pixel point is located is the pixel row where the third position point is located; M and N are integers greater than 0. The expansion unit 606 is specifically configured to:
  • take the target pixel point as the starting point, and adjust the starting position of the target pixel point according to the first offset to obtain the final position of the target pixel point; the first offset is the offset from the target pixel point to the second position point;
  • with the final position as the starting point, acquire the first lane line point obtained by extending the target pixel point according to the second offset, and the second lane line point obtained by extending the target pixel point according to the third offset; the second offset is the offset from the target pixel point to the first position point, and the third offset is the offset from the target pixel point to the third position point;
  • when the confidence value corresponding to the first lane line point is greater than the first confidence value, take the first lane line point as the current target pixel point and continue the expansion, until the confidence value corresponding to the extended first lane line point is not greater than the first confidence value; and likewise, when the confidence value corresponding to the second lane line point is greater than the first confidence value, take the second lane line point as the current target pixel point and continue the expansion, until the confidence value corresponding to the extended second lane line point is not greater than the first confidence value.
  • In a possible implementation, the location obtaining unit 604 includes a pixel point selection unit, wherein:
  • the pixel point selection unit is configured to select a target pixel row in the lane line image according to the confidence map; the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods where the first pixel points are located;
  • the first pixel point is a pixel point whose confidence value is greater than the second confidence threshold; the target confidence maximum is a confidence maximum greater than the second confidence threshold; the confidence map includes the confidence value of each pixel in the lane line image, and the confidence value is used to represent the credibility that each pixel in the lane line image belongs to the lane line area;
  • and to select, in the target pixel row, all pixel points greater than the second confidence threshold as target pixel points, or to select, among the pixel points greater than the second confidence threshold, those whose confidence is a maximum within their neighborhood as target pixel points.
  • In another possible implementation, the location obtaining unit 604 includes a pixel point selection unit, wherein:
  • the pixel point selection unit is configured to select target pixel rows in the lane line image according to the confidence map;
  • a target pixel row is a pixel row in which the number of second pixel points is greater than the target value;
  • the second pixel point is a pixel point whose confidence value is greater than the second confidence threshold;
  • the confidence map includes the confidence value of each pixel in the lane line image;
  • the confidence value is used to represent the credibility that each pixel in the lane line image belongs to the lane line area;
  • In a possible implementation, the apparatus 60 further includes a deduplication unit 608, configured to: obtain the degree of overlap between every two lane line point sets; and, if the degree of overlap between two lane line point sets is greater than the target threshold, delete either one of the two lane line point sets.
  • The lane line detection device can take the target pixel point as the starting point, obtain the other instance points of the lane line extended from the target pixel point according to the at least three position points associated with it, and then obtain the lane line point set.
  • the lane lines in the lane line image can be quickly and accurately identified.
  • A lane line detection device 70 may include a processor 701, a memory 702, a communication bus 703, and a communication interface 704, and the processor 701 is connected to the memory 702 through the communication bus 703 and the communication interface 704.
  • The processor 701 may be a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a graphics processor (Graphics Processing Unit, GPU), a neural network processor (Network Processing Unit, NPU), or one or more integrated circuits, for executing a related program to perform the lane line detection method described in the method embodiments of the present application.
  • the processor 701 can also be an integrated circuit chip, which has signal processing capability. In the implementation process, each step of the lane line detection method of the present application may be completed by an integrated logic circuit of hardware in the processor 701 or instructions in the form of software.
  • The above-mentioned processor 701 can also be a general-purpose processor, a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and, in combination with its hardware, executes the lane line detection method of the method embodiments of the present application.
  • the memory 702 may be a read only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM).
  • the memory 702 may store programs and data, for example, programs of the lane line detection method in the embodiments of the present application, and the like.
  • the processor 701 and the communication interface 704 are used to execute each step of the lane line detection method of the embodiment of the present application.
  • The communication interface 704 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the lane line detection device 700 and other devices or a communication network.
  • the trained neural network can be obtained through the communication interface 704 to realize information interaction with execution equipment, client equipment, user equipment or terminal equipment.
  • In addition, the lane line detection device may further include an artificial intelligence processor 705; the artificial intelligence processor 705 may be a neural network processor (Network Processing Unit, NPU), a tensor processor (Tensor Processing Unit, TPU), a graphics processor (Graphics Processing Unit, GPU), or another processor suitable for large-scale exclusive-OR operation processing.
  • the artificial intelligence processor 705 can be mounted on the main CPU (Host CPU) as a co-processor, and the main CPU assigns tasks to it.
  • the artificial intelligence processor 705 may implement one or more operations involved in the above lane line detection method. For example, taking the NPU as an example, the core part of the NPU is an arithmetic circuit, and the controller controls the arithmetic circuit to extract the matrix data in the memory 702 and perform multiplication and addition operations.
  • the processor 701 is used to call data and program codes in the memory, and execute:
  • the lane line area is the area where the lane line is located in the lane line image and the surrounding area of the location of the lane line;
  • a set of lane line points corresponding to the target pixel point is obtained.
  • the processor 701 determines candidate pixel points for identifying the lane line area based on the lane line image, and obtains a candidate point set, including:
  • the confidence map includes the confidence value of each pixel in the lane line image; the confidence value is used to characterize the reliability that each pixel in the lane line image belongs to the lane line area;
  • candidate pixel points for identifying the lane line area are determined.
  • In a possible implementation, the at least three position points include a first position point, a second position point and a third position point; the pixel row N rows above the pixel row where the target pixel point is located is the pixel row where the first position point is located; the pixel row where the target pixel point is located is the same as the pixel row where the second position point is located; the pixel row M rows below the pixel row where the target pixel point is located is the pixel row where the third position point is located; M and N are integers greater than 0.
  • the processor 701 takes the target pixel as a starting point, and expands according to at least three position points associated with the target pixel, including:
  • the starting position of the target pixel point is adjusted according to the first offset to obtain the final position of the target pixel point; the first offset is the offset from the target pixel point to the second position point;
  • with the final position as the starting point, the first lane line point obtained by extending the target pixel point according to the second offset is acquired, and the second lane line point obtained by extending the target pixel point according to the third offset is acquired; the second offset is the offset from the target pixel point to the first position point, and the third offset is the offset from the target pixel point to the third position point;
  • when the confidence value corresponding to the first lane line point is greater than the first confidence value, the first lane line point is taken as the current target pixel point and the expansion is continued, until the confidence value corresponding to the extended first lane line point is not greater than the first confidence value; likewise, when the confidence value corresponding to the second lane line point is greater than the first confidence value, the second lane line point is taken as the current target pixel point and the expansion is continued, until the confidence value corresponding to the extended second lane line point is not greater than the first confidence value.
  • In a possible implementation, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the pixel row where the target pixel point is located is the pixel row where the fourth position point is located; the pixel row M rows below the pixel row where the target pixel point is located is the pixel row where the fifth position point is located; M and N are integers greater than 0. The processor 701 taking the target pixel point as a starting point and expanding according to the at least three position points associated with the target pixel point includes:
  • with the starting position of the target pixel point as the starting point, acquiring the third lane line point obtained by extending the target pixel point according to the fourth offset, and the fourth lane line point obtained by extending the target pixel point according to the fifth offset; the fourth offset is the offset from the target pixel point to the fourth position point, and the fifth offset is the offset from the target pixel point to the fifth position point;
  • when the confidence value corresponding to the third lane line point is greater than the first confidence value, taking the third lane line point as the current target pixel point and continuing the expansion, until the confidence value corresponding to the extended third lane line point is not greater than the first confidence value; and likewise, when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, taking the fourth lane line point as the current target pixel point and continuing the expansion, until the confidence value corresponding to the extended fourth lane line point is not greater than the first confidence value.
  • In a possible implementation, the at least three position points include a sixth position point, a seventh position point and an eighth position point; the pixel row N rows above the pixel row where the target pixel point is located is the pixel row where the sixth position point is located; the pixel row where the target pixel point is located is the same as the pixel row where the seventh position point is located; the pixel row M rows below the pixel row where the target pixel point is located is the pixel row where the eighth position point is located; M and N are integers greater than 0.
  • the processor 701 takes the target pixel as a starting point, and expands according to at least three position points associated with the target pixel, including:
  • the sixth offset is the offset from the target pixel point to the seventh position point;
  • the processor 701 selects the target pixel point in the candidate point set, including:
  • according to the confidence map, a target pixel row is selected in the lane line image; the target pixel row is the pixel row with the largest number of target confidence maxima in the neighborhoods where the first pixel points are located; the first pixel point is a pixel point whose confidence value is greater than the second confidence threshold; the target confidence maximum is a confidence maximum greater than the second confidence threshold; the confidence map includes the confidence value of each pixel in the lane line image; the confidence value is used to represent the confidence level that each pixel in the lane line image belongs to the lane line area;
  • in the target pixel row, all pixel points greater than the second confidence threshold are selected as target pixel points, or, among the pixel points greater than the second confidence threshold, those whose confidence is a maximum within their neighborhood are selected as target pixel points.
  • the processor 701 selects the target pixel point in the candidate point set, including:
  • according to the confidence map, target pixel rows are selected in the lane line image; a target pixel row is a pixel row in which the number of second pixel points is greater than the target value; the second pixel point is a pixel point whose confidence value is greater than the second confidence threshold;
  • the confidence map includes the confidence value of each pixel in the lane line image; the confidence value is used to represent the credibility that each pixel in the lane line image belongs to the lane line area;
  • In a possible implementation, the processor 701 can also be used to: obtain the degree of overlap between every two lane line point sets; and, if the degree of overlap between two lane line point sets is greater than the target threshold, delete either one of the two lane line point sets.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program enables an electronic device to execute some or all of the steps of any one of the lane line detection methods described in the above method embodiments.
  • Embodiments of the present application further provide a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause an electronic device to execute some or all of the steps of any one of the lane line detection methods described in the foregoing method embodiments.
  • Computer-readable media may include computer-readable storage media, which corresponds to tangible media such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol).
  • a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave.
  • Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this application.
  • the computer program product may comprise a computer-readable medium.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be implemented through some interfaces; the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application can be embodied in the form of a software product in essence, or the part that contributes to the prior art or the part of the technical solution.
  • the computer software product is stored in a storage medium, including Several instructions are used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program codes .

Abstract

A lane line detection method, a related device, and a computer-readable storage medium. The method may include the following steps: first, obtain a lane line image to be recognized (S400); then, based on the lane line image, determine candidate pixel points for identifying a lane line region to obtain a candidate point set (S402), where the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location; next, select a target pixel point from the candidate point set, and obtain at least three position points associated with the target pixel point within its neighborhood, the at least three position points lying on the same lane line (S404); finally, taking the target pixel point as a starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point (S406). By implementing this method, the lane lines in a lane line image can be recognized quickly and accurately.

Description

Lane line detection method, related device, and computer-readable storage medium
Technical Field
This application relates to the field of intelligent transportation technologies, and in particular, to a lane line detection method, a related device, and a computer-readable storage medium.
Background
In recent years, with improvements in living standards and technology, more and more researchers have carried out in-depth studies on intelligent vehicle technologies, such as advanced driving assistance systems (ADAS, Advanced Driving Assistant System) and autonomous driving systems (ADS, Autonomous Driving System), for both of which lane line detection is crucial. Specifically, lane line detection means detecting the lane lines on the road while the vehicle is driving, so as to ensure that the vehicle stays within the lane limits and to reduce the chance of collisions with other vehicles caused by crossing lanes. Owing to the diversity of driving scenarios, the diversity of lane line types, interference from environmental factors such as distance, occlusion, and illumination, and the fact that the number of lane lines in each image frame is not fixed, lane line detection remains a challenging topic.
In existing lane line detection research, the original image is generally preprocessed first (for example, by edge detection) to obtain the edge information of the image; the obtained edge information is then used to extract the edge points of the lane lines, and finally the lane lines are fitted from the lane line edge points. However, this approach requires a large amount of computation in the edge-extraction stage, which not only consumes computing resources but also tends to make lane line detection inaccurate. Furthermore, although the above method can detect lane lines close to the vehicle's current position, it cannot accurately detect lane lines far from the vehicle's current position. In lane line detection application scenarios, lane lines need to be detected quickly and accurately to assist the driver in making decisions and to avoid traffic accidents. Therefore, how to detect lane lines quickly and accurately is an urgent technical problem.
Summary
This application provides a lane line detection method, a related device, and a computer-readable storage medium that can detect lane lines quickly and accurately.
According to a first aspect, an embodiment of this application provides a lane line detection method, which may include the following steps: first, obtain a lane line image to be recognized; then, based on the lane line image, determine candidate pixel points for identifying a lane line region to obtain a candidate point set, where the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location, and a candidate pixel point is a pixel point that is highly likely to fall within the lane line region; select a target pixel point from the candidate point set, and obtain at least three position points associated with the target pixel point within its neighborhood, the at least three position points lying on the same lane line — here, the position of the target pixel point indicates the presence of a lane line, and the three position points characterize the local structure of the lane line within the neighborhood of the target pixel point; taking the target pixel point as a starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
By implementing this embodiment, the lane line detection device can take the target pixel point as a starting point and, according to the at least three positions associated with the target pixel point, obtain the other lane line instance points expanded from the target pixel point, and then obtain the lane line point set. In this way, the lane lines in the lane line image can be recognized quickly and accurately. Furthermore, the method can quickly and accurately detect not only lane lines close to the vehicle's current position but also lane lines far from it; that is, it captures the extension of the lane lines well, providing a foundation for autonomous driving and thereby helping ensure safety during autonomous driving.
In a possible implementation, determining, based on the lane line image, candidate pixel points for identifying the lane line region to obtain the candidate point set may include: first, generating a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and then, in the confidence map, determining the candidate pixel points for identifying the lane line region.
In a possible implementation, the at least three position points include a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: first, adjusting the starting position of the target pixel point according to a first offset to obtain the final position of the target pixel point, where the first offset is the offset from the target pixel point to the second position point; taking the final position of the target pixel point as a starting point, obtaining a first lane line point expanded from the target pixel point according to a second offset, and obtaining a second lane line point expanded from the target pixel point according to a third offset, where the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point; when the confidence value corresponding to the first lane line point is greater than a first confidence value, taking the first lane line point as the current target pixel point and performing the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and when the confidence value corresponding to the second lane line point is greater than the first confidence value, taking the second lane line point as the current target pixel point and performing the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value. Implementing this embodiment ensures that every expanded lane line point is a pixel belonging to the lane line region and avoids expanding from invalid lane line points, so that lane lines can be recognized quickly and accurately.
In a possible implementation, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the row of the target pixel point is the row of the fourth position point; the pixel row M rows below the row of the target pixel point is the row of the fifth position point; M and N are integers greater than 0. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: first, taking the starting position of the target pixel point as a starting point, obtaining a third lane line point expanded from the target pixel point according to a fourth offset, and obtaining a fourth lane line point expanded from the target pixel point according to a fifth offset, where the fourth offset is the offset from the target pixel point to the fourth position point and the fifth offset is the offset from the target pixel point to the fifth position point; then, when the confidence value corresponding to the third lane line point is greater than the first confidence value, taking the third lane line point as the current target pixel point and performing the step of obtaining a third lane line point expanded from the target pixel point according to the fourth offset, until the confidence value corresponding to the expanded third lane line point is not greater than the first confidence value; and when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, taking the fourth lane line point as the current target pixel point and performing the step of obtaining a fourth lane line point expanded from the target pixel point according to the fifth offset, until the confidence value corresponding to the expanded fourth lane line point is not greater than the first confidence value. Implementing this embodiment ensures that every expanded lane line point is a pixel belonging to the lane line region and avoids expanding from invalid lane line points, so that lane lines can be recognized quickly and accurately.
In a possible implementation, the above three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N (for example, N=1) rows above the row of the target pixel point is the row of the sixth position point; the row of the target pixel point is the same as the row of the seventh position point; the pixel row M (for example, M=1) rows below the row of the target pixel point is the row of the eighth position point. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: taking the target pixel point as a starting point, adjusting the starting position of the target pixel point according to a sixth offset to obtain a sixth lane line point, where the sixth offset is the offset from the target pixel point to the seventh position point; obtaining, according to a seventh offset, the candidate pixel point closest to the sixth position point to obtain a seventh lane line point, and obtaining an eighth lane line point expanded from the seventh lane line point according to the sixth offset; obtaining, according to an eighth offset, the candidate pixel point closest to the eighth position point to obtain a ninth lane line point, and obtaining a tenth lane line point expanded from the ninth lane line point according to the sixth offset; the sixth lane line point, the eighth lane line point, and the tenth lane line point are used to form the lane line point set corresponding to the target pixel point.
In a possible implementation, the above three position points include a ninth position point and a tenth position point; the pixel row N (for example, N=1) rows above the row of the target pixel point is the row of the ninth position point; the pixel row M (for example, M=1) rows below the row of the target pixel point is the row of the tenth position point. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: taking the target pixel point as a starting point, obtaining, according to a ninth offset, the candidate pixel point closest to the ninth position point to obtain an eleventh lane line point; and obtaining, according to a tenth offset, the candidate pixel point closest to the tenth position point to obtain a twelfth lane line point; the eleventh lane line point and the twelfth lane line point are used to form the lane line point set corresponding to the target pixel point.
In a possible implementation, selecting the target pixel point from the candidate point set may include: first, selecting a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; then, in the target pixel row, selecting all pixel points greater than the second confidence value as target pixel points, or selecting the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as target pixel points. Implementing this embodiment ensures that the determined target pixel points cover, as far as possible, all lane lines contained in the lane line image, avoiding missed detections.
In a possible implementation, selecting the target pixel point from the candidate point set may include: first, selecting target pixel rows in the lane line image according to the confidence map, where the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; then, in the target pixel rows, selecting all pixel points greater than the second confidence value as target pixel points. Implementing this embodiment ensures that the determined target pixel points cover, as far as possible, all lane lines contained in the lane line image, avoiding missed detections.
In a possible implementation, when the target pixel points come from multiple pixel rows, the method described in this application may further include the following steps: first, obtaining the degree of overlap between each pair of lane line point sets; then, determining whether the overlap between a pair of lane line point sets is greater than a target threshold, and if so, deleting either one of the two lane line point sets. Implementing this embodiment, the lane line detection device deletes either one of two lane line point sets when their overlap is greater than the target threshold, which ensures the accuracy of the lane line recognition result and avoids false detections.
According to a second aspect, an embodiment of this application provides a lane line detection apparatus, which may include: an image obtaining unit, configured to obtain a lane line image to be recognized; a candidate pixel point determining unit, configured to determine, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set, where the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location; a position obtaining unit, configured to select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point within its neighborhood, the at least three position points lying on the same lane line; and an expansion unit, configured to take the target pixel point as a starting point and expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
By implementing this embodiment, the lane line detection device can take the target pixel point as a starting point and, according to the at least three positions associated with the target pixel point, obtain the other lane line instance points expanded from the target pixel point, and then obtain the lane line point set. In this way, the lane lines in the lane line image can be recognized quickly and accurately.
In a possible implementation, the candidate pixel point determining unit is specifically configured to: generate a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and, in the confidence map, determine the candidate pixel points for identifying the lane line region.
In a possible implementation, the at least three position points include a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0. The expansion unit is specifically configured to: adjust the starting position of the target pixel point according to a first offset to obtain the final position of the target pixel point, where the first offset is the offset from the target pixel point to the second position point; taking the final position of the target pixel point as a starting point, obtain a first lane line point expanded from the target pixel point according to a second offset, and obtain a second lane line point expanded from the target pixel point according to a third offset, where the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point; when the confidence value corresponding to the first lane line point is greater than a first confidence value, take the first lane line point as the current target pixel point and perform the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and when the confidence value corresponding to the second lane line point is greater than the first confidence value, take the second lane line point as the current target pixel point and perform the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value.
In a possible implementation, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the row of the target pixel point is the row of the fourth position point; the pixel row M rows below the row of the target pixel point is the row of the fifth position point; M and N are integers greater than 0. The expansion unit is specifically configured to: taking the starting position of the target pixel point as a starting point, obtain a third lane line point expanded from the target pixel point according to a fourth offset, and obtain a fourth lane line point expanded from the target pixel point according to a fifth offset, where the fourth offset is the offset from the target pixel point to the fourth position point and the fifth offset is the offset from the target pixel point to the fifth position point; when the confidence value corresponding to the third lane line point is greater than a first confidence value, take the third lane line point as the current target pixel point and perform the step of obtaining a third lane line point expanded from the target pixel point according to the fourth offset, until the confidence value corresponding to the expanded third lane line point is not greater than the first confidence value; and when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, take the fourth lane line point as the current target pixel point and perform the step of obtaining a fourth lane line point expanded from the target pixel point according to the fifth offset, until the confidence value corresponding to the expanded fourth lane line point is not greater than the first confidence value.
In a possible implementation, the at least three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N rows above the row of the target pixel point is the row of the sixth position point; the row of the target pixel point is the same as the row of the seventh position point; the pixel row M rows below the row of the target pixel point is the row of the eighth position point; M and N are integers greater than 0. The expansion unit is specifically configured to: taking the target pixel point as a starting point, adjust the starting position of the target pixel point according to a sixth offset to obtain a sixth lane line point, where the sixth offset is the offset from the target pixel point to the seventh position point; obtain, according to a seventh offset, the candidate pixel point closest to the sixth position point to obtain a seventh lane line point, and obtain an eighth lane line point expanded from the seventh lane line point according to the sixth offset; obtain, according to an eighth offset, the candidate pixel point closest to the eighth position point to obtain a ninth lane line point, and obtain a tenth lane line point expanded from the ninth lane line point according to the sixth offset; the sixth lane line point, the eighth lane line point, and the tenth lane line point are used to form the lane line point set corresponding to the target pixel point.
In a possible implementation, the position obtaining unit includes a pixel point selection unit, where the pixel point selection unit is configured to: select a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and, in the target pixel row, select all pixel points greater than the second confidence value as the target pixel points, or select the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as the target pixel points.
In a possible implementation, the position obtaining unit includes a pixel point selection unit, where the pixel point selection unit is configured to: select target pixel rows in the lane line image according to the confidence map, where the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than a second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and, in the target pixel rows, select all pixel points greater than the second confidence value as the target pixel points.
In a possible implementation, when the target pixel points come from multiple pixel rows, the apparatus further includes: a de-duplication unit, configured to obtain the degree of overlap between each pair of lane line point sets, and, if the overlap between a pair of lane line point sets is greater than a target threshold, delete either one of the two lane line point sets.
According to a third aspect, an embodiment of this application further provides an autonomous driving apparatus, including the apparatus according to any implementation of the second aspect.
According to a fourth aspect, an embodiment of this application provides a lane line detection device, which may include a memory and a processor, where the memory is configured to store a computer program that supports the device in executing the above method, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method according to any implementation of the first aspect.
According to a fifth aspect, an embodiment of this application provides a chip, where the chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to execute the method in the first aspect.
Optionally, as an implementation, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to execute some or all of the method in the first aspect.
According to a sixth aspect, an embodiment of this application further provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method according to any implementation of the first aspect.
According to a seventh aspect, an embodiment of this application further provides a computer program, where the computer program includes computer software instructions that, when executed by a computer, cause the computer to execute any lane line detection method according to the first aspect.
Brief Description of the Drawings
FIG. 1a is a schematic structural diagram of an autonomous driving apparatus 100 according to an embodiment of this application;
FIG. 1b is a schematic structural diagram of an autonomous driving system according to an embodiment of this application;
FIG. 2a is a schematic diagram of a lane keeping assist function according to an embodiment of this application;
FIG. 2b is a schematic diagram of a lane departure warning function according to an embodiment of this application;
FIG. 2c is a schematic diagram of cruise control by an adaptive cruise control system according to an embodiment of this application;
FIG. 2d is a schematic diagram of deceleration control by an adaptive cruise control system according to an embodiment of this application;
FIG. 2e is a schematic diagram of tracking control by an adaptive cruise control system according to an embodiment of this application;
FIG. 2f is a schematic diagram of acceleration control by an adaptive cruise control system according to an embodiment of this application;
FIG. 3a is a schematic structural diagram of a system architecture 300 according to an embodiment of this application;
FIG. 3b is a schematic structural diagram of a lane line detection model according to an embodiment of this application;
FIG. 4a is a schematic flowchart of a lane line detection method according to an embodiment of this application;
FIG. 4b is a schematic diagram of candidate pixel points according to an embodiment of this application;
FIG. 4c is a schematic diagram of the δ-neighborhood of a point a according to an embodiment of this application;
FIG. 4d is a schematic diagram of at least three position points associated with a target pixel point on the nearest lane line according to an embodiment of this application;
FIG. 4e is a schematic diagram of selecting target pixel points according to an embodiment of this application;
FIG. 4f is a schematic diagram of another way of selecting target pixel points according to an embodiment of this application;
FIG. 4g is a schematic diagram of a target pixel point and the at least three position points associated with it within its neighborhood according to an embodiment of this application;
FIG. 4h is a schematic diagram of an operation of expanding lane line points according to an embodiment of this application;
FIG. 4i is a schematic diagram of an operation of expanding lane line points according to an embodiment of this application;
FIG. 4j is a schematic diagram of an operation of expanding lane line points according to an embodiment of this application;
FIG. 4k is a schematic diagram of pixel rows according to an embodiment of this application;
FIG. 5a is a schematic flowchart of another lane line detection method according to an embodiment of this application;
FIG. 5b is a schematic diagram of a lane line image according to an embodiment of this application;
FIG. 5c is a schematic diagram of a recognition result of a lane line image according to an embodiment of this application;
FIG. 5d is a schematic diagram of another recognition result of a lane line image according to an embodiment of this application;
FIG. 5e is a schematic diagram of displaying the recognition result of a lane line image on a central control screen according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a lane line detection apparatus according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of a lane line detection device according to an embodiment of this application.
Detailed Description
The embodiments of this application are described below with reference to the accompanying drawings in the embodiments of this application.
The terms "first" and "second" in the specification and drawings of this application are used to distinguish different objects, or to distinguish different processing of the same object, rather than to describe a specific order of objects. In addition, the terms "include" and "have" and any variants thereof mentioned in the description of this application are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally also includes other steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device. It should be noted that, in the embodiments of this application, words such as "exemplarily" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplarily" or "for example" in the embodiments of this application should not be construed as being preferable to or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplarily" or "for example" is intended to present related concepts in a concrete manner. In the embodiments of this application, "A and/or B" covers both meanings: "A and B" and "A or B". "A, and/or B, and/or C" means any one of A, B, and C, or any two of A, B, and C, or A and B and C.
To facilitate a better understanding of the technical solutions described in this application, the technical terms involved in the embodiments of this application are explained first:
(1) Autonomous vehicles (self-piloting automobiles)
In the embodiments of this application, an autonomous vehicle, also called a driverless car, computer-driven car, or wheeled mobile robot, is an intelligent vehicle that achieves driverless operation through a computer system. In practical applications, an autonomous vehicle relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices, and a global positioning system, so that the computer device can operate the motor vehicle automatically and safely without any active human operation.
(2) Lane lines
In the embodiments of this application, lane lines are the lane markings used to guide vehicle driving.
In practical applications, lane lines generally have colors. For example, the color of a lane line may be white or yellow, which is not specifically limited in the embodiments of this application. Specifically, lane lines may take the form of solid white lines, dashed white lines, solid yellow lines, dashed yellow lines, lane stop lines, and so on. Yellow is used to distinguish lanes in different directions. A single yellow line is generally used on roads with no more than four two-way lanes (including bicycle lanes); double yellow lines are generally used on wider roads. When a double yellow line consists of one solid line and one dashed line, vehicles on the dashed side may temporarily cross the lane when it is safe to do so, for example, to turn or overtake; vehicles on the solid side are prohibited from overtaking, crossing, or turning around. When a double yellow line consists of two solid lines, crossing lanes is prohibited. White lines, by contrast, are used to distinguish lanes in the same direction.
In the embodiments of this application, a road is a passage for vehicles that connects two places. A lane is a passage for a single column of vehicles traveling in the same direction; common lanes include different types such as straight-ahead lanes, left-turn lanes, and right-turn lanes. A road includes one or more lanes. For example, a road may include four lanes in total: one left-turn lane, two straight-ahead lanes, and one right-turn lane. Taking a single straight-ahead lane as an example, it includes two lane lines.
It should be noted that the lane line detection method provided in this application can be applied to driver assistance (for example, lane keeping assist in advanced driver assistance, lane departure correction, and intelligent cruise assist in advanced driver assistance) and vehicle positioning scenarios, and can also be applied throughout the entire autonomous driving process of a vehicle to ensure safety and smoothness while driving.
FIG. 1a is a functional block diagram of an autonomous driving apparatus 100 according to an embodiment of this application. In some implementations, the autonomous driving apparatus 100 may be configured in a fully autonomous driving mode, a partially autonomous driving mode, or a manual driving mode. Taking the autonomous driving levels proposed by the Society of Automotive Engineers (SAE) as an example, the fully autonomous driving mode may be L5, meaning that the vehicle performs all driving operations and the human driver does not need to stay attentive; the partially autonomous driving modes may be L1, L2, L3, and L4, where L1 means the vehicle provides driving for one of steering and acceleration/deceleration and the human driver is responsible for the remaining driving operations; L2 means the vehicle provides driving for multiple operations among steering and acceleration/deceleration and the human driver is responsible for the remaining driving actions; L3 means the vehicle performs the vast majority of driving operations and the human driver needs to stay attentive in case of need; L4 means the vehicle performs all driving operations and the human driver does not need to stay attentive, but road and environmental conditions are restricted; and the manual driving mode may be L0, meaning that the human driver drives the car entirely.
In practical applications, the autonomous driving apparatus 100 can control itself while in the autonomous driving mode; the current state of the vehicle and its surroundings can also be determined through human operation, the possible behavior of at least one other vehicle in the surroundings can be determined along with the confidence level corresponding to the possibility of the other vehicle performing that behavior, and the autonomous driving apparatus 100 can be controlled based on the determined information. When the autonomous driving apparatus 100 is in the fully autonomous driving mode, it can be set to operate without human interaction.
In the embodiments of this application, the autonomous driving apparatus 100 may include multiple subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116. In some implementations, the autonomous driving apparatus 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each subsystem and element of the autonomous driving apparatus 100 may be interconnected by wire or wirelessly.
In the embodiments of this application, the travel system 102 may include components that provide powered motion for the autonomous driving apparatus 100. In some implementations, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. In practical applications, the engine 118 converts the energy source 119 into mechanical energy.
In the embodiments of this application, the energy source 119 may include, but is not limited to, gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, or other power sources. The energy source 119 may also provide energy for other systems of the autonomous driving apparatus 100.
In the embodiments of this application, the transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In some implementations, the transmission 120 may also include other components, such as a clutch. The drive shaft includes one or more shafts that can be coupled to one or more wheels 121.
In the embodiments of this application, the sensor system 104 may include several sensors that sense information about the environment around the autonomous driving apparatus 100. For example, the sensor system 104 may include a positioning system 122 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor the internal systems of the autonomous driving apparatus 100, such as an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge. Data from one or more of these sensors can be used to detect objects and their corresponding characteristics (for example, position, shape, direction, speed). Such detection and recognition are key functions for the safe operation of the autonomous driving apparatus 100.
In the embodiments of this application, the global positioning system 122 can be used to estimate the geographic position of the autonomous driving apparatus 100. Exemplarily, the geographic position of the autonomous driving apparatus 100 may be estimated through the IMU 124. Specifically, the IMU 124 is used to sense changes in the position and orientation of the autonomous driving apparatus 100 based on inertial acceleration. In some implementations, the IMU 124 may be a combination of an accelerometer and a gyroscope.
In the embodiments of this application, the radar 126 may use radio signals to sense objects in the surroundings of the autonomous driving apparatus 100. In some implementations, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of objects.
In the embodiments of this application, the laser rangefinder 128 may use lasers to sense objects in the environment of the autonomous driving apparatus 100. In some implementations, the laser rangefinder 128 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
In the embodiments of this application, the camera 130 may be used to capture multiple images of the surroundings of the autonomous driving apparatus 100. In some implementations, the camera 130 may be a still camera or a video camera, which is not specifically limited in the embodiments of this application.
In the embodiments of this application, the control system 106 may control the operation of the autonomous driving apparatus 100 and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
In the embodiments of this application, the steering system 132 is operable to adjust the heading of the autonomous driving apparatus 100. For example, in one embodiment it may be a steering wheel system.
In the embodiments of this application, the throttle 134 is used to control the operating speed of the engine 118 and thereby the speed of the autonomous driving apparatus 100.
In the embodiments of this application, the braking unit 136 is used to control the speed of the autonomous driving apparatus 100. The braking unit 136 may use friction to slow the wheels 121. In some implementations, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation of the wheels 121 and thereby control the speed of the autonomous driving apparatus 100.
In the embodiments of this application, the computer vision system 140 is operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surroundings of the autonomous driving apparatus 100. In some implementations, the objects and/or features mentioned here may include, but are not limited to, traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure-from-motion (SFM) algorithms, visual tracking, and other computer vision techniques. In some implementations, the computer vision system 140 may be used to map the environment, track objects, estimate object speeds, and so on.
In the embodiments of this application, the route control system 142 is used to determine the driving route of the autonomous driving apparatus 100. In some implementations, the route control system 142 may combine data from the sensors, the positioning system 122, and one or more predetermined maps to determine the driving route for the autonomous driving apparatus 100.
In the embodiments of this application, the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the autonomous driving apparatus 100. An obstacle, as the name suggests, is something that hinders or blocks. Exemplarily, potential obstacles may include other vehicles, pedestrians, bicycles, static objects, and other obstacles that have a potential or direct impact on the driving of the vehicle.
It should be understood that, in some implementations, the control system 106 may additionally or alternatively include components other than those shown and described in FIG. 1a, or some of the components shown above may be omitted.
In the embodiments of this application, the autonomous driving apparatus 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150, and/or a speaker 152.
In some implementations, the peripheral devices 108 provide a means for a user of the autonomous driving apparatus 100 to interact with the user interface 116. For example, the on-board computer 148 may provide information to the user of the autonomous driving apparatus 100. The user interface 116 may also operate the on-board computer 148 to receive user input. The on-board computer 148 may be operated through a touchscreen. In other cases, the peripheral devices 108 may provide a means for the autonomous driving apparatus 100 to communicate with other devices in the vehicle. For example, the microphone 150 may receive audio, such as voice commands or other audio input, from the user of the autonomous driving apparatus 100. Similarly, the speaker 152 may output audio to the user of the autonomous driving apparatus 100.
In the embodiments of this application, the wireless communication system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication such as LTE; or 5G cellular communication. In some implementations, the wireless communication system 146 may communicate with a wireless local area network (WLAN) using Wi-Fi. In some implementations, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee. Other wireless protocols are also possible; for example, for various vehicle communication systems, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
In the embodiments of this application, the power supply 110 may provide power to various components of the autonomous driving apparatus 100. In some implementations, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as the power supply to provide power to the various components of the autonomous driving apparatus 100. In some implementations, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
In the embodiments of this application, some or all of the functions of the autonomous driving apparatus 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable storage medium such as the data storage device 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the autonomous driving apparatus 100 in a distributed manner.
In some implementations, the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Although FIG. 1a functionally shows the processor, memory, and other elements in the same physical housing, those of ordinary skill in the art should understand that the processor, computer system, or memory may in fact comprise multiple processors, computer systems, or memories that are not necessarily stored in the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a different housing. Therefore, a reference to a processor or computer system is to be understood as including a reference to a collection of processors, computer systems, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the computations related to the function of that specific component.
In the various aspects described herein, the processor 113 may be located remotely from the vehicle and communicate wirelessly with it. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single operation.
In some implementations, the data storage device 114 may include instructions 115 (for example, program logic) that can be executed by the processor 113 to perform various functions of the autonomous driving apparatus 100, including those described above. The data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the data storage device 114 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, other vehicle data, and other information. Such information may be used by the autonomous driving apparatus 100 and the computer system 112 while the autonomous driving apparatus 100 operates in autonomous, semi-autonomous, and/or manual modes.
For example, the data storage device 114 obtains environment information of the vehicle from the sensor system 104 or other components of the autonomous driving apparatus 100. The environment information may be, for example, lane line information, the number of lanes, road boundary information, road driving parameters, traffic signals, green belt information, and the presence of pedestrians or vehicles in the environment where the vehicle is currently located. The data storage device 114 may also store state information of the vehicle itself and of other vehicles interacting with it. The state information may include, but is not limited to, the speed, acceleration, and heading angle of the vehicle. For example, based on the speed-measuring and distance-measuring functions of the radar 126, the vehicle obtains the distance between other vehicles and itself, the speed of other vehicles, and so on. In this case, the processor 113 can obtain the above vehicle data from the data storage device 114 and determine a driving strategy that meets safety requirements based on the environment information of the vehicle.
For example, the data storage device 114 may obtain, from the sensor system 104 or other components of the autonomous driving apparatus 100, driving video captured while the vehicle is driving, and then preprocess the driving video to obtain the lane line image to be recognized. In this case, the processor 113 can obtain the lane line image to be recognized from the data storage device 114; determine, based on the lane line image, candidate pixel points for identifying the lane line region to obtain a candidate point set; then select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point within its neighborhood, the at least three position points lying on the same lane line; and thus, taking the target pixel point as a starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point. In general, this implementation provides strong support for the autonomous driving of the vehicle.
In the embodiments of this application, the user interface 116 is used to provide information to or receive information from the user of the autonomous driving apparatus 100. In some implementations, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as one or more of the wireless communication system 146, the on-board computer 148, the microphone 150, and the speaker 152.
In the embodiments of this application, the computer system 112 may control the functions of the autonomous driving apparatus 100 based on input received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some implementations, the computer system 112 is operable to provide control over many aspects of the autonomous driving apparatus 100 and its subsystems.
In some implementations, one or more of the above components may be installed separately from or associated with the autonomous driving apparatus 100. For example, the data storage device 114 may exist partially or completely separate from the autonomous driving apparatus 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
In some implementations, the above components are merely an example. In practical applications, components in the above modules may be added or deleted according to actual needs, and FIG. 1a should not be construed as a limitation on the embodiments of this application.
An autonomous vehicle traveling on a road, such as the autonomous driving apparatus 100, can recognize objects in its surroundings to determine whether to adjust its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some implementations, each recognized object may be considered independently, and the speed to which the autonomous vehicle should adjust can be determined based on the respective characteristics of each object, such as its current driving data, its acceleration, and the distance between it and the vehicle.
In some implementations, the autonomous driving apparatus 100, or a computer device associated with it (such as the computer system 112, the computer vision system 140, and the data storage device 114 shown in FIG. 1a), may predict the behavior of the recognized objects based on the characteristics of the recognized objects and the state of the surrounding environment (for example, traffic, rain, ice on the road). In some implementations, each recognized object depends on the behavior of the others, so all recognized objects can also be considered together to predict the behavior of a single recognized object. The autonomous driving apparatus 100 can adjust its speed based on the predicted behavior of the recognized objects. In other words, the autonomous driving apparatus 100 can determine, based on the predicted behavior of the objects, what stable state the vehicle needs to adjust to (for example, the adjustment operation may include accelerating, decelerating, or stopping). In this process, other factors may also be considered to determine the speed of the autonomous driving apparatus 100, such as the lateral position of the autonomous driving apparatus 100 on the road it is traveling on, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computer device may also provide instructions to modify the steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (for example, cars in adjacent lanes on the road).
In the embodiments of this application, the autonomous driving apparatus 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a handcart, or the like, which is not specifically limited in the embodiments of this application.
In some implementations, the autonomous driving apparatus 100 may further include a hardware structure and/or software modules, implementing the above functions in the form of a hardware structure, software modules, or a combination of both. Whether a given function among the above is performed by a hardware structure, software modules, or both depends on the specific application and design constraints of the technical solution.
FIG. 1a presents the functional block diagram of the autonomous driving apparatus 100; the autonomous driving system 101 in the autonomous driving apparatus 100 is introduced below. FIG. 1b is a schematic structural diagram of an autonomous driving system according to an embodiment of this application. FIG. 1a and FIG. 1b describe the autonomous driving apparatus 100 from different perspectives; for example, the computer system 112 in FIG. 1a corresponds to the computer system 101 in FIG. 1b. As shown in FIG. 1b, the computer system 101 includes a processor 103 coupled to a system bus 105. The processor 103 may be one or more processors, each of which may include one or more processor cores. A video adapter 107 can drive a display 109, and the display 109 is coupled to the system bus 105. The system bus 105 is coupled to an input/output (I/O) bus 113 through a bus bridge 111. An I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with a variety of I/O devices, such as an input device 117 (for example, a keyboard, mouse, or touchscreen), a media tray 121 (for example, a CD-ROM or a multimedia interface), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125. Optionally, the interface connected to the I/O interface 115 may be a USB interface.
The processor 103 may be any conventional processor, including a reduced instruction set computing ("RISC") processor, a complex instruction set computing ("CISC") processor, or a combination of the above. Optionally, the processor may be a dedicated device such as an application-specific integrated circuit ("ASIC"). Optionally, the processor 103 may be a neural network processor or a combination of a neural network processor and the above conventional processors.
Optionally, in the various embodiments described herein, the computer system 101 may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle 100. In other aspects, some of the processes described herein are executed on a processor disposed within the autonomous vehicle, and others are executed by a remote processor, including taking the actions needed to perform a single maneuver.
The computer 101 may communicate with a software deployment server 149 through a network interface 129. The network interface 129 is a hardware network interface, such as a network card. The network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN). Optionally, the network 127 may also be a wireless network, such as a Wi-Fi network or a cellular network.
A hard disk drive interface is coupled to the system bus 105. The hard disk drive interface is connected to a hard disk drive. A system memory 135 is coupled to the system bus 105. Data running in the system memory 135 may include the operating system 137 and application programs 143 of the computer 101.
The operating system includes a shell 139 and a kernel 141. The shell 139 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for user input, interprets the user input for the operating system, and handles the various output results of the operating system.
The kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the operating system kernel typically runs processes, provides inter-process communication, and provides CPU time-slice management, interrupts, memory management, I/O management, and so on.
The application programs 143 include programs related to controlling the autonomous driving of the car, such as programs for managing the interaction between the autonomous vehicle and obstacles on the road, programs for controlling the route or speed of the autonomous vehicle, and programs for controlling the interaction between the autonomous vehicle and other autonomous vehicles on the road. The application programs 143 also exist on the system of the software deployment server (deploying server) 149. In one embodiment, when an application program 143 needs to be executed, the computer system 101 may download it from the deploying server 149.
A sensor 153 is associated with the computer system 101. The sensor 153 is used to detect the environment around the computer 101. For example, the sensor 153 can detect animals, cars, obstacles, crosswalks, and so on; further, the sensor can also detect the environment around such objects, for example, the environment around an animal, such as other animals appearing around it, weather conditions, and the brightness of the surroundings. Optionally, if the computer 101 is located on an autonomous vehicle, the sensors may be a camera, an infrared sensor, a chemical detector, a microphone, an inertial measurement unit, a laser rangefinder, a positioning system, and so on. When activated, the sensor 153 senses information at preset intervals and provides the sensed information to the computer system 101 in real time.
For example, the positioning system in the sensor 153 obtains the driving position of the vehicle, the inertial measurement unit obtains the heading angle of the vehicle, the camera obtains the drivable area of the vehicle and the size of obstacles, and the laser rangefinder obtains the distance between the vehicle and obstacles.
It should be noted that, in the embodiments of this application, the vehicle may also be referred to as the ego vehicle.
The processor 103 obtains, via the system bus 105 and the hard disk drive interface 131, the relevant data collected by the sensor 153 and the camera 155 from the hard disk drive, and invokes the autonomous-driving-related program 147 in the application programs 143 to execute the following method:
obtain a lane line image to be recognized; next, based on the lane line image, determine candidate pixel points for identifying a lane line region to obtain a candidate point set — specifically, the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location; then select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point within its neighborhood, the three position points lying on the same lane line — here, the position of the target pixel point indicates the presence of a lane line, and the three position points characterize the local structure of the lane line within the neighborhood of the target pixel point; and thereby, taking the target pixel point as a starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
Specifically, first, the lane line image to be recognized is obtained; then the lane line image to be recognized is input into a trained lane line detection model, and the trained lane line detection model produces a confidence map and a neighborhood map, where the confidence map contains the confidence value of each pixel in the lane line image — the confidence value represents the degree to which each pixel can be trusted to belong to the lane line region and can be obtained, for example, by computing the probability that the pixel belongs to the lane line region — and the neighborhood map includes the at least three position points associated with the target pixel point within its neighborhood; here, the position of the target pixel point indicates the presence of a lane line, and the three position points characterize the local structure of the lane line within the neighborhood of the target pixel point. Then, taking the target pixel point as a starting point, expansion is performed according to the at least three position points associated with the target pixel point, expanding out independent lane line points, so that the lane line point set corresponding to the target pixel point can be obtained. In this process, a target pixel row can be selected in the lane line image according to the confidence map. In one example, the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points, where a first pixel point is a pixel point greater than a second confidence threshold and a target confidence local maximum is greater than the second confidence threshold. In another example, the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value, where a second pixel point is a pixel point greater than the second confidence threshold. Target pixel points can then be selected in the determined target pixel rows. It can be understood that, since the number of target pixel points can take many forms — that is, the method does not presuppose the number of target pixel points — any number of lane lines can be detected, avoiding missed detections. Furthermore, the method is applicable to special cases such as lane crossing and merging.
Optionally, in the various embodiments described herein, the computer system 101 may be located remotely from the autonomous driving apparatus 100 and may communicate wirelessly with it. The transceiver 123 may send autonomous driving tasks, sensor data collected by the sensor 153, and other data to the computer system 101, and may also receive control instructions sent by the computer system 101. The autonomous driving apparatus may execute the control instructions received by the transceiver 123 from the computer system 101 and perform the corresponding driving operations. In other aspects, some of the processes described herein are executed on a processor disposed within the autonomous vehicle, and others are executed by a remote processor, including taking the actions needed to perform a single operation.
To facilitate a better understanding of this application, several application scenarios to which the methods described in this application can be applied are introduced below:
First application scenario: lane keeping assist system.
As shown in FIG. 2a, when a car traveling on a road deviates from the lane centerline (driving trajectory b shown in FIG. 2a), the lane keeping assist system first issues a warning signal to the driver. If the driver does not respond within a period of time and the car shows no tendency to return to the lane centerline, the lane keeping assist system sends a corresponding steering command to the vehicle's steering actuator through the electronic control unit (ECU, Electronic Control Unit) to correct the vehicle's driving state and bring the car back to driving along the lane centerline (driving trajectory a shown in FIG. 2a), thereby ensuring driving safety. Realizing this function requires determining whether the vehicle deviates from the lane, and that determination requires recognizing the lane lines in the images captured by the vehicle. Therefore, recognizing lane lines quickly and accurately is the core of the lane keeping assist system, and the recognition quality directly affects the response accuracy of the whole system.
Second application scenario: lane departure warning system.
As shown in FIG. 2b, while a car is traveling on a road, the lane departure warning system, building on lane line recognition, simultaneously obtains dynamic vehicle parameters such as speed and steering state and determines whether the vehicle is departing from its lane. When the lane departure warning system determines that the vehicle is departing from the lane (driving direction a shown in FIG. 2b), it reminds the driver of the current driving state through alarms, intervention, and other means, prompting the vehicle to return to the correct lane (driving direction b shown in FIG. 2b), so as to reduce traffic accidents. Exemplarily, when the predicted driving direction coincides with the expected driving trajectory, or the angle α between the two lines is smaller than a set value (for example, 1°), the car can be considered not to have deviated; when the predicted driving direction does not coincide with the expected driving trajectory and the angle α between the two lines is greater than the set value (for example, 1°), the car can be considered to have departed from the lane. It can be understood that realizing this function requires determining whether the vehicle deviates from the lane, and that determination requires recognizing the lane lines in the images captured by the vehicle. Therefore, recognizing the lane lines in the image is crucial.
Third application scenario: adaptive cruise control system.
While the car is traveling on a road, after the adaptive cruise control setting button is switched on, the radar installed at the front of the vehicle continuously scans the road ahead and measures the distance between the vehicle and the vehicle or obstacle in front; it can also detect lane lines in real time to help the driver keep the vehicle on the lane centerline. As shown in FIG. 2c, if there are no other vehicles or obstacles ahead, the vehicle cruises at a set speed. As shown in FIG. 2d, when a moving vehicle or obstacle appears in front of the vehicle and is detected by the front radar, the adaptive cruise control system controller computes the vehicle's own data (for example, speed and acceleration) and then issues a deceleration control signal to reduce the vehicle's speed. As shown in FIG. 2e, when the distance to the vehicle ahead becomes too small, or when another vehicle suddenly cuts in ahead and its speed is lower than that of the ego vehicle, the adaptive cruise control system issues a deceleration control signal to lower the vehicle's speed and keep a safe distance between the two vehicles, switching to tracking control after decelerating to the desired value. When it is detected that the vehicle ahead is accelerating, the adaptive cruise control system issues an acceleration signal, and the vehicle uses tracking control after accelerating to the desired value. As shown in FIG. 2f, when the ego vehicle or the vehicle ahead leaves the original lane so that there is no vehicle ahead, the adaptive cruise control system can issue an acceleration control signal so that the vehicle cruises at a constant, set speed.
Fourth application scenario: vehicle positioning system.
At present, fusion approaches (for example, fusing absolute positioning and relative positioning) are commonly used to achieve high-precision positioning of autonomous vehicles. First, the vehicle's built-in global positioning system (GPS, Global Positioning System) and inertial navigation sensors can be used to determine the vehicle's approximate position; then the high-definition map, lidar point cloud images, and camera image features are matched to determine the precise position of the vehicle. In the process of matching based on camera images, lane lines are often modeled in the high-definition map as a stable road structure; the vehicle's camera detects the lane lines during actual driving, and these are then matched against the lane lines in the high-definition map to complete positioning.
An embodiment of this application provides a lane line detection model training method, which is applied to the training of a task-specific/prediction model (hereinafter referred to as a task model). Specifically, it can be used to train various task models built on deep neural networks, including but not limited to classification models, recognition models, segmentation models, and detection models. Task models obtained through the training method described in this application (for example, the trained lane line detection model) can be widely applied to a variety of specific application scenarios such as image recognition (for example, lane keeping assist systems, lane departure warning systems, and adaptive cruise control systems), making those scenarios intelligent.
The system architecture of the embodiments of this application is introduced below. Referring to FIG. 3a, a schematic structural diagram of a system architecture 300 according to an embodiment of this application: as shown in the system architecture 300, a data collection device 340 is used to collect or generate training data. In the embodiments of this application, the training data may be multiple labeled images, where each labeled image includes, for every pixel in the image, a label indicating whether that pixel belongs to the lane line region (for example, a label of 1 means the pixel belongs to the lane line region, and a label of 0 means it does not), as well as, for every pixel belonging to the lane line region, the at least three position points associated with that pixel within its neighborhood, the three position points lying on the same lane line. The training data is stored in a database 330, and a training device 320 generates a target model/rule 301 based on the training data maintained in the database 330; for example, the training device 320 trains the lane line detection model with the labeled training data until the lane line detection model reaches a convergence state, yielding the trained lane line detection model.
Exemplarily, the lane line detection model may be built on a convolutional neural network, or may use other classification or regression models (for example, a support vector machine, SVM), and so on.
Taking a lane line detection model built on a convolutional neural network as an example, the CNN adopts an encoder-decoder architecture to realize lane line detection. Exemplarily, the specific structure of the lane line detection model may be as shown in FIG. 3b, including an Encoder module 31, a multi-scale context module 32, and a Decoder module 33. Specifically, the Encoder module 31 may include an input layer 310, a convolutional layer 311, and a pooling layer 312, where the input layer 310 is used to receive input data, for example an input image; the convolutional layer 311 is used to extract features of the input data — for example, when the input data is an image, the convolutional layer 311 extracts features of the input image to reduce the parameters brought by the input image; and the pooling layer 312 is used to downsample the data and reduce its quantity. For example, in image processing, the pooling layer 312 can reduce the spatial size of the image. Generally, the pooling layer 312 may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain an image of smaller size. The average pooling operator computes the average of the pixel values within a specific range as the result of average pooling. The max pooling operator takes the pixel with the largest value within a specific range as the result of max pooling. Afterwards, the multi-scale context module 32 aggregates the multi-scale context information output by the Encoder module 31, and the resulting output features are fed into the Decoder module 33. The Decoder module 33 may include a convolutional layer 331, a deconvolutional layer 332, a convolutional layer 333, a deconvolutional layer 334, and a convolutional layer 335. Through the Decoder module 33, the confidence map and the neighborhood map can be obtained, where the confidence map contains the confidence value of each pixel in the lane line image — the confidence value represents the degree to which each pixel can be trusted to belong to the lane line region — and the neighborhood map includes the at least three position points associated with a target pixel point within its neighborhood; here, the position of the target pixel point indicates the presence of a lane line, and the three position points characterize the local structure of the lane line within the neighborhood of the target pixel point.
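The two-headed encoder-decoder described above can be sketched as follows. This is a minimal sketch only, assuming a PyTorch environment; the layer widths, the hypothetical LaneNetSketch name, and the choice of three offset channels (horizontal offsets to the lane line in the row above, the same row, and the row below) are illustrative assumptions, and the multi-scale context module 32 is omitted for brevity — this is not the exact architecture of FIG. 3b.

    # Minimal sketch of an encoder-decoder lane line network with two output
    # heads: a 1-channel confidence map and a 3-channel neighborhood (offset)
    # map. All layer widths are illustrative assumptions.
    import torch
    import torch.nn as nn

    class LaneNetSketch(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            # Encoder: convolution + pooling to extract and downsample features
            self.encoder = nn.Sequential(
                nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Decoder: transposed convolutions to restore resolution
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(hidden, hidden, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(hidden, hidden, 2, stride=2), nn.ReLU(),
            )
            # Head 1: per-pixel confidence of belonging to a lane line region
            self.confidence_head = nn.Conv2d(hidden, 1, 1)
            # Head 2: per-pixel horizontal offsets to the lane line in the row
            # above, the same row, and the row below (3 channels)
            self.offset_head = nn.Conv2d(hidden, 3, 1)

        def forward(self, image):
            features = self.decoder(self.encoder(image))
            confidence = torch.sigmoid(self.confidence_head(features))
            offsets = self.offset_head(features)
            return confidence, offsets

    # Usage: one 256x512 RGB frame in, confidence and offset maps out.
    model = LaneNetSketch()
    conf, offs = model(torch.randn(1, 3, 256, 512))
    print(conf.shape, offs.shape)  # (1, 1, 256, 512) and (1, 3, 256, 512)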
Specifically, the process by which the training device 320 trains the lane line detection model may include the following steps:
Step S11: obtain training samples; the training samples include labeled images, where each labeled image includes, for every pixel, a label indicating whether that pixel belongs to the lane line region, as well as, for every pixel belonging to the lane line region, the at least three position points associated with that pixel within its neighborhood, the three position points lying on the same lane line. Exemplarily, the label of a pixel may be 1, indicating that the pixel belongs to the lane line region, or 0, indicating that it does not.
Step S12: train the lane line detection model with the above training samples until the lane line detection model reaches a convergence state, yielding the trained lane line detection model.
In one example, the convergence state may include the state reached by the lane line detection model after the number of times the training device 320 has trained it reaches a set number of epochs. Specifically, an epoch count of 1 means that the training device 320 trains the lane line detection model once using all the data in the training data set. When the number of times the model has been trained on all the data in the training data set reaches the set number of epochs, the training of the lane line detection model is complete, and at this point the model is in the convergence state.
In one example, considering that the lane line detection model may specifically be a convolutional neural network, and that a convolutional neural network can use the error back-propagation algorithm during training to correct the parameters in the initial model so that the reconstruction error loss of the initial model becomes smaller and smaller, the convergence state may also include the state reached by the lane line detection model when the output value of the loss function keeps decreasing during training until the loss function approaches the objective function.
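As an illustration of the two convergence criteria just described, the following is a minimal training-loop sketch. It assumes the LaneNetSketch model above, a hypothetical loader yielding images with per-pixel lane labels and offset targets, and simple binary cross-entropy plus masked L1 losses; the embodiments do not specify the actual loss functions, so these are assumptions.

    # Minimal training loop sketch: stop after a fixed number of epochs, or
    # early once the loss falls below a target value. Losses are assumed.
    import torch
    import torch.nn.functional as F

    def train(model, loader, epochs=50, loss_target=1e-3, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for epoch in range(epochs):                     # epoch-count criterion
            total = 0.0
            for image, lane_label, offset_target in loader:
                conf, offs = model(image)
                # Confidence head: binary label, 1 = pixel in lane line region
                loss_conf = F.binary_cross_entropy(conf, lane_label)
                # Offset head: supervised only inside the labeled lane region
                loss_offs = (F.l1_loss(offs, offset_target, reduction="none")
                             * lane_label).mean()
                loss = loss_conf + loss_offs
                opt.zero_grad()
                loss.backward()                         # error back-propagation
                opt.step()
                total += loss.item()
            if total / max(len(loader), 1) < loss_target:   # loss criterion
                break
        return model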
It should be noted that, in practical applications, the training data maintained in the database 330 does not necessarily all come from the collection of the data collection device 340; it may also be received from other devices. It should also be noted that the training device 320 does not necessarily train the target model/rule 301 entirely based on the training data maintained by the database 330; it may also obtain training data from the cloud or generate it itself for model training. The above description should not be taken as a limitation on the embodiments of this application.
The target model/rule 301 trained by the training device 320 can be applied to different systems or devices, such as the execution device 310 shown in FIG. 3a. In the embodiments of this application, the execution device may be an in-vehicle terminal on a vehicle. The execution device 310 can execute the data processing methods in the embodiments of this application; for example, the data processing methods may include image processing methods. In FIG. 3a, the execution device 310 is configured with an I/O interface 312 for data interaction with external devices, and a user can input data to the I/O interface 312 through a client device 340. In the embodiments of this application, the input data may include images or videos to be recognized.
Specifically, the process by which the execution device 310 runs the trained lane line detection model may include the following steps:
Step S21: obtain a lane line image to be recognized;
Step S22: process the lane line image to be recognized with the lane line detection model to recognize the lane lines in the lane line image.
While the computing module 311 of the execution device 310 performs computation and related processing, the execution device 310 may invoke data, code, and the like in the data storage system 370 for the corresponding processing, and may also store the data, instructions, and the like obtained from the processing into the data storage system 370.
It is worth noting that FIG. 3a is merely a schematic diagram of a system architecture provided by an embodiment of this application, and the positional relationships between the devices, components, modules, and the like shown in the figure do not constitute any limitation. For example, in FIG. 3a, the data storage system 350 is external memory relative to the execution device 310; in other cases, the data storage system 350 may also be placed within the execution device 310.
The methods involved in the embodiments of this application are described in detail below. FIG. 4a shows a lane line detection method according to an embodiment of this application, and the method may include, but is not limited to, the following steps:
Step S401: obtain a lane line image to be recognized.
Specifically, the lane line image to be recognized may be obtained through a camera. Exemplarily, the camera may be installed at the front of the vehicle to capture lane line images while the vehicle is driving. In this application, a lane line image is an image containing lane lines. Further, the lane line image may be a single image or a video frame extracted from a video.
In one example, after the camera captures the lane line image, it may send the image to the in-vehicle terminal, so that the in-vehicle terminal processes the lane line image to obtain the lane line region.
In one example, after the camera captures the lane line image, it may send the image to a server, so that the server processes the lane line image to obtain the lane line region. After the server recognizes the lane line region, it sends the lane line region to the in-vehicle terminal, so that the in-vehicle terminal can realize the vehicle's autonomous driving function in combination with the autonomous driving system on the vehicle.
Step S402: based on the lane line image, determine candidate pixel points for identifying the lane line region to obtain a candidate point set; the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location.
In the embodiments of this application, a candidate pixel point is a pixel point that is highly likely to fall within the lane line region.
In the embodiments of this application, determining, based on the lane line image, candidate pixel points for identifying the lane line region to obtain the candidate point set may include: first, generating a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel can be trusted to belong to the lane line region. For example, for the K-th pixel, the probability that it belongs to the lane line region can be computed by formula (1):

p_k = 1 / (1 + e^(−f_k(W, I)))    (1)

where f_k(W, I) denotes the output of the network for the K-th pixel, given the network weights W and the input image I.
The probability value computed by formula (1) ranges between 0 and 1. It can be understood that the larger the probability value corresponding to a pixel, the more it can be trusted to belong to the lane line region. When a pixel is located at the center of the lane line (the center of the lane line's geometric shape), the probability value is 1; for other pixels within the neighborhood of that pixel, the larger the distance from the lane line center position, the smaller the probability value.
For example, as shown in FIG. 4b, taking lane line 1 as an example, the dashed box in the figure shows the lane line region, which may include the region where the lane line is located as well as the surrounding region of that location. Specifically, the surrounding region may be the region within a preset range from the lane line center position. Taking lane line 2 as an example, candidate pixel points are pixels belonging to the lane line region (shown as black circles in the figure), and non-candidate pixel points are pixels not belonging to the lane line region (shown as white circles in the figure).
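To make the candidate-point step concrete, the following sketch thresholds a confidence map into a candidate point set; the threshold of 0.5 and the NumPy representation are illustrative assumptions rather than values fixed by the embodiments.

    # Minimal sketch: turn a per-pixel confidence map into a candidate point set.
    import numpy as np

    def candidate_points(confidence: np.ndarray, threshold: float = 0.5):
        """confidence: (H, W) array of per-pixel lane-region probabilities."""
        rows, cols = np.nonzero(confidence > threshold)
        # Each candidate is (row, col); these are the pixels highly likely to
        # fall within a lane line region.
        return list(zip(rows.tolist(), cols.tolist()))

    conf = np.random.rand(8, 16)          # stand-in for a real confidence map
    print(len(candidate_points(conf)))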
Step S403: select a target pixel point from the candidate point set, and obtain at least three position points associated with the target pixel point within its neighborhood; the at least three position points lie on the same lane line.
FIG. 4c shows the δ-neighborhood of a point a: let δ be a positive number; then the open interval (a−δ, a+δ) is called the δ-neighborhood of the point a, written U = (a, δ) = {x | a−δ < x < a+δ}; the point a is called the center of this neighborhood, and δ is called its radius. In the embodiments of this application, the value of δ is generally small, for example δ = 0.5 mm, or δ = 0.5 cm.
In the embodiments of this application, as shown in FIG. 4d, the at least three position points associated with the target pixel point within its neighborhood are position points on the lane line closest to the target pixel point (lane line 2 in FIG. 4d). Specifically, the position of the target pixel point indicates the presence of a lane line, and the three position points characterize the local structure of the lane line within the neighborhood of the target pixel point.
In some embodiments, selecting the target pixel point from the candidate point set may include: first, selecting a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold (for example, the second confidence value is 0.5); the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel can be trusted to belong to the lane line region. Then, in the target pixel row, all pixel points greater than the second confidence value are selected as target pixel points, or the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value are selected as target pixel points. Since the obtained target pixel points are more likely to represent the center positions of the lane lines, this provides an effective guarantee for subsequently obtaining the lane line point set corresponding to each target pixel point. For example, as shown in FIG. 4e, the image contains three lane lines: lane line 1, lane line 2, and lane line 3; the black points are target confidence local-maximum points, and the pixel row with the most such points (for example, pixel row 3 in FIG. 4e) is selected as the target pixel row.
In practical applications, the above target pixel points can be obtained through the max pooling operation in a convolutional neural network.
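For example, such local confidence maxima can be found by comparing the confidence map with a max-pooled copy of itself, as in the following sketch; the 3-pixel horizontal window is an illustrative assumption.

    # Sketch: pick per-row local maxima of the confidence map via max pooling.
    import torch
    import torch.nn.functional as F

    def local_maxima_mask(confidence: torch.Tensor, window: int = 3,
                          threshold: float = 0.5) -> torch.Tensor:
        """confidence: (1, 1, H, W). Returns a boolean mask of pixels that are
        above `threshold` and equal to the maximum within a horizontal window,
        i.e. locally maximal along their pixel row."""
        pooled = F.max_pool2d(confidence, kernel_size=(1, window),
                              stride=1, padding=(0, window // 2))
        return (confidence > threshold) & (confidence == pooled)

    conf = torch.rand(1, 1, 8, 16)
    mask = local_maxima_mask(conf)
    print(mask.sum().item(), "local maxima above threshold")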
In some embodiments, selecting the target pixel point from the candidate point set may include: first, selecting target pixel rows in the lane line image according to the confidence map, where the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value (for example, half the number of lane lines); a second pixel point is a pixel point greater than the second confidence threshold (for example, the second confidence value is 0.5); the confidence map contains the confidence value of each pixel in the lane line image; the confidence value represents the degree to which each pixel can be trusted to belong to the lane line region. Then, in the target pixel rows, all pixel points greater than the second confidence value are selected as target pixel points. For example, as shown in FIG. 4f, the image contains three lane lines: lane line 1, lane line 2, and lane line 3; the black points are pixel points greater than the second confidence threshold (for example, in FIG. 4f, the confidence value corresponding to pixel point 1 is 1, and that corresponding to pixel point 2 is 0.7), and all pixel points greater than the second confidence value are selected as target pixel points.
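A sketch of this row-count variant follows; the target value of half the number of lane lines is taken from the example above, and the array layout is assumed.

    # Sketch: select target pixel rows where the count of high-confidence
    # pixels exceeds a target value, then take those pixels as target points.
    import numpy as np

    def target_points_by_row(confidence: np.ndarray, num_lanes: int,
                             threshold: float = 0.5):
        target_value = num_lanes / 2      # example target value from the text
        points = []
        for r, row in enumerate(confidence):
            cols = np.nonzero(row > threshold)[0]
            if len(cols) > target_value:  # this row is a target pixel row
                points.extend((r, int(c)) for c in cols)
        return points

    conf = np.random.rand(8, 16)
    print(target_points_by_row(conf, num_lanes=3)[:5])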
In general, the methods of determining target pixel points described in this application ensure that the determined target pixel points cover, as far as possible, all lane lines contained in the lane line image, avoiding missed detections.
Step S406: taking the target pixel point as a starting point, expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
In the embodiments of this application, one lane line point set represents one complete lane line. It can be understood that the number of lane line point sets equals the number of lane lines.
In the embodiments of this application, as shown in FIG. 4g, the above three position points include a first position point, a second position point, and a third position point; the pixel row N (for example, N=1) rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M (for example, M=1) rows below the row of the target pixel point is the row of the third position point. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include:
First, the starting position of the target pixel point is adjusted according to the first offset to obtain the final position of the target pixel point, where the first offset is the offset from the target pixel point to the second position point. For example, when the target pixel point is a pixel obtained by upward expansion, the final position of the target pixel point can be expressed as:

P_k^up(final) = ( x(P_k^up) + Δx_same , y(P_k^up) )

where x(P_k^up) denotes taking the horizontal coordinate of the expanded pixel P_k^up, y(P_k^up) denotes taking its vertical coordinate, and Δx_same denotes the first offset. Likewise, when the target pixel point is a pixel obtained by downward expansion:

P_k^down(final) = ( x(P_k^down) + Δx_same , y(P_k^down) )

where x(P_k^down) denotes taking the horizontal coordinate of the expanded pixel P_k^down, y(P_k^down) denotes taking its vertical coordinate, and Δx_same denotes the first offset. Through this implementation, the precise position of the lane line can be progressively fine-tuned, and the other lane line points are expanded from pixels located at the center of the lane line, which improves the accuracy of lane line detection.
Then, taking the final position of the target pixel point as a starting point, a first lane line point expanded from the target pixel point according to the second offset is obtained, where the second offset is the offset from the target pixel point to the first position point. For example, the first lane line point can be expressed as:

P_k^up = ( x(P_k) + Δx_up , y(P_k) − N )

where P_k is the pixel point, x(P_k) denotes taking the horizontal coordinate of the pixel P_k, y(P_k) denotes taking its vertical coordinate, Δx_up denotes the second offset, and N is the number of pixel rows between the row of the target pixel point and the row of the first position point.
At the same time, a second lane line point expanded from the target pixel point according to the third offset is also obtained, where the third offset is the offset from the target pixel point to the third position point. For example, the second lane line point can be expressed as:

P_k^down = ( x(P_k) + Δx_down , y(P_k) + M )

where P_k is the pixel point, x(P_k) denotes taking the horizontal coordinate of the pixel P_k, y(P_k) denotes taking its vertical coordinate, Δx_down denotes the third offset, and M is the number of pixel rows between the row of the target pixel point and the row of the third position point.
When the confidence value corresponding to the first lane line point is greater than the first confidence value, the first lane line point is taken as the current target pixel point, and the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset is performed, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value (for example, the first confidence value is 0.8). When the confidence value corresponding to the second lane line point is greater than the first confidence value, the second lane line point is taken as the current target pixel point, and the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset is performed, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value. For example, as shown in FIG. 4h, expanding in this way yields the lane line point set corresponding to each pixel point. This implementation ensures that every expanded lane line point is a pixel belonging to the lane line region and avoids expanding from invalid lane line points, so that lane lines can be recognized quickly and accurately.
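The bidirectional expansion loop just described can be sketched as follows. The offset-map layout (channels 0, 1, and 2 holding the up, same-row, and down horizontal offsets), the stopping threshold of 0.8, and the row step N = M = 1 are illustrative assumptions.

    # Sketch: expand a lane line point set up and down from one target pixel,
    # stopping when the expanded point's confidence is no longer above the
    # first confidence value or the point leaves the image.
    import numpy as np

    def expand_lane(conf, offsets, start_row, start_col, first_conf=0.8):
        """conf: (H, W) confidence map; offsets: (3, H, W) with channels
        (up, same, down) horizontal offsets. Returns the lane line point set."""
        H, W = conf.shape

        def walk(row, col, direction):    # direction: -1 = up, +1 = down
            points = []
            while True:
                # fine-tune the current position with the same-row offset
                col = int(round(col + offsets[1, row, col]))
                if not (0 <= col < W):
                    break
                channel = 0 if direction < 0 else 2
                nxt_row = row + direction          # N = M = 1 row step
                nxt_col = int(round(col + offsets[channel, row, col]))
                if not (0 <= nxt_row < H and 0 <= nxt_col < W):
                    break                  # expanded point left the image
                if conf[nxt_row, nxt_col] <= first_conf:
                    break                  # invalid expansion point: stop
                points.append((nxt_row, nxt_col))
                row, col = nxt_row, nxt_col
            return points

        lane = walk(start_row, start_col, -1)[::-1]
        lane.append((start_row, start_col))
        lane += walk(start_row, start_col, +1)
        return lane

    conf = np.clip(np.random.rand(8, 16) + 0.5, 0, 1)
    offs = np.zeros((3, 8, 16))
    print(expand_lane(conf, offs, start_row=4, start_col=8)[:5])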
In some embodiments, the at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the row of the target pixel point is the row of the fourth position point; the pixel row M rows below the row of the target pixel point is the row of the fifth position point; M and N are integers greater than 0. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point includes: taking the starting position of the target pixel point as a starting point, obtaining a third lane line point expanded from the target pixel point according to a fourth offset, where the fourth offset is the offset from the target pixel point to the fourth position point.
For example, the third lane line point can be expressed as:

P_k^up = ( x(P_k) + Δx_up , y(P_k) − N )

where P_k is the pixel point, x(P_k) denotes taking the horizontal coordinate of the pixel P_k, y(P_k) denotes taking its vertical coordinate, and N is the number of pixel rows between the row of the target pixel point and the row of the fourth position point.
At the same time, a fourth lane line point expanded from the target pixel point according to a fifth offset is also obtained, where the fifth offset is the offset from the target pixel point to the fifth position point.
For example, the fourth lane line point can be expressed as:

P_k^down = ( x(P_k) + Δx_down , y(P_k) + M )

where P_k is the pixel point, x(P_k) denotes taking the horizontal coordinate of the pixel P_k, y(P_k) denotes taking its vertical coordinate, and M is the number of pixel rows between the row of the target pixel point and the row of the fifth position point.
When the confidence value corresponding to the third lane line point is greater than the first confidence value, the third lane line point is taken as the current target pixel point, and the step of obtaining a third lane line point expanded from the target pixel point according to the fourth offset is performed, until the confidence value corresponding to the expanded third lane line point is not greater than the first confidence value (for example, the first confidence value is 0.8); when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, the fourth lane line point is taken as the current target pixel point, and the step of obtaining a fourth lane line point expanded from the target pixel point according to the fifth offset is performed, until the confidence value corresponding to the expanded fourth lane line point is not greater than the first confidence value. This implementation ensures that every expanded lane line point is a pixel belonging to the lane line region, so that lane lines can be recognized quickly and accurately.
It should be noted that the above description takes the second lane line point as the example of an expanded instance point. "The confidence value corresponding to the second lane line point is greater than the first confidence value" means that the expanded second lane line point is a valid expanded instance point; in this case, expansion can continue from this valid expanded instance point to find more valid expanded instance points. "The confidence value corresponding to the second lane line point is not greater than the first confidence value (for example, the confidence value corresponding to the second lane line point is smaller than, or equal to, the first confidence value)" means that the expanded second lane line point is an invalid expanded instance point; in this case, expansion stops. It should also be noted that when an expanded instance point falls outside the range of the lane line image (that is, the expanded instance point does not belong to the lane line image), the expanded instance point can be considered an invalid expanded instance point; in this case, expansion also stops.
In some embodiments, the above three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N (for example, N=1) rows above the row of the target pixel point is the row of the sixth position point; the row of the target pixel point is the same as the row of the seventh position point; the pixel row M (for example, M=1) rows below the row of the target pixel point is the row of the eighth position point. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: taking the target pixel point as a starting point, adjusting the starting position of the target pixel point according to a sixth offset to obtain a sixth lane line point, where the sixth offset is the offset from the target pixel point to the seventh position point; obtaining, according to a seventh offset, the candidate pixel point closest to the sixth position point to obtain a seventh lane line point, and obtaining an eighth lane line point expanded from the seventh lane line point according to the sixth offset; obtaining, according to an eighth offset, the candidate pixel point closest to the eighth position point to obtain a ninth lane line point, and obtaining a tenth lane line point expanded from the ninth lane line point according to the sixth offset; the sixth lane line point, the eighth lane line point, and the tenth lane line point are used to form the lane line point set corresponding to the target pixel point.
For example, as shown in FIG. 4i, in the second pixel row, taking the target pixel point as a starting point, the starting position of the target pixel point is adjusted within the same pixel row according to the sixth offset, and the pixel point at that position is obtained (for example, the sixth lane line point shown in FIG. 4i). Then, in the row N rows above, the candidate pixel point closest to the sixth position point is obtained according to the seventh offset (for example, the seventh lane line point shown in FIG. 4i), and from that candidate pixel point the eighth lane line point is expanded according to the sixth offset. In the row M rows below, the candidate pixel point closest to the eighth position point is obtained according to the eighth offset (for example, the ninth lane line point shown in FIG. 4i), and from that candidate pixel point the tenth lane line point is expanded according to the sixth offset, so that the lane line point set corresponding to the target pixel point can be obtained. It can be understood that, when there are multiple target pixel points, the above method can be used to obtain the lane line point set corresponding to each pixel point in parallel.
In some embodiments, the above three position points include a ninth position point and a tenth position point; the pixel row N (for example, N=1) rows above the row of the target pixel point is the row of the ninth position point; the pixel row M (for example, M=1) rows below the row of the target pixel point is the row of the tenth position point. Taking the target pixel point as a starting point, the expansion according to the at least three position points associated with the target pixel point may include: taking the target pixel point as a starting point, obtaining, according to a ninth offset, the candidate pixel point closest to the ninth position point to obtain an eleventh lane line point; and obtaining, according to a tenth offset, the candidate pixel point closest to the tenth position point to obtain a twelfth lane line point.
For example, as shown in FIG. 4j, in the second pixel row, taking the target pixel point as a starting point, the candidate pixel point closest to the ninth position point is obtained according to the ninth offset (for example, the eleventh lane line point shown in FIG. 4j); the candidate pixel point closest to the tenth position point is obtained according to the tenth offset (for example, the twelfth lane line point shown in FIG. 4j), so that the lane line point set corresponding to the target pixel point can be obtained. It can be understood that, when there are multiple target pixel points, the above method can be used to obtain the lane line point set corresponding to each pixel point in parallel.
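A sketch of this nearest-candidate step follows; the dictionary representation of the candidate point set is an assumption for illustration.

    # Sketch: expand by snapping the predicted position in the neighboring
    # row to the nearest candidate pixel in that row.
    import numpy as np

    def snap_to_nearest_candidate(candidates_by_row, row, predicted_col):
        """candidates_by_row: dict mapping row -> list of candidate columns.
        Returns the candidate (row, col) closest to the predicted position,
        or None if the row holds no candidates."""
        cols = candidates_by_row.get(row)
        if not cols:
            return None
        idx = int(np.argmin(np.abs(np.asarray(cols) - predicted_col)))
        return (row, int(cols[idx]))

    cands = {3: [4, 9, 14], 5: [5, 10]}
    # target pixel at row 4, col 8; the ninth offset predicts col 8.6 one row up
    print(snap_to_nearest_candidate(cands, 3, 8.6))   # -> (3, 9)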
It should also be noted that, in the embodiments of this application, M and N may take the same value or different values, for example, M=N=1, or M=N=2, and so on. As shown in FIG. 4k, one dashed line represents one pixel row; adjacent pixel rows may be 10 pixels apart, 3 pixels apart, or some other number of pixels apart, where that number is a value within a fluctuation range (for example, 50%) above or below a set reference value.
In general, the method described above in this application can quickly and accurately detect not only lane lines close to the vehicle's current position but also lane lines far from it; that is, it captures the extension of the lane lines well, providing a foundation for autonomous driving and thereby helping ensure safety during autonomous driving.
By implementing this embodiment, the lane line detection device can take the target pixel point as a starting point and, according to the at least three positions associated with the target pixel point, obtain the other lane line instance points expanded from the target pixel point, and then obtain the lane line point set. In this way, the lane lines in the lane line image can be recognized quickly and accurately.
Considering that, when the target pixel points come from multiple pixel rows, the collection of lane line point sets obtained by the above method (composed of the lane line point set corresponding to each target pixel point) may contain duplicate lane line points, this method embodiment, as shown in FIG. 5a and building on the method shown in FIG. 4a, explains how to remove duplicate lane line points, and may include, but is not limited to, the following steps:
Step S408: obtain the degree of overlap between each pair of lane line point sets.
In one example, after the lane line point set corresponding to each pixel point is obtained, the overlap between each pair of lane line point sets is computed among the multiple lane line point sets.
In one example, after the lane line point set corresponding to each pixel point is obtained, a score can be determined for each lane line point set according to the confidence values corresponding to all pixel points in that set; the lane line point sets are then sorted by score (for example, in descending or ascending order); afterwards, among the lane line point sets whose score is greater than a set threshold (for example, the set threshold may be obtained by averaging the confidence values of all pixels in the confidence map), the overlap between each pair of lane line point sets is computed.
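For example, the scoring-and-filtering step might look like the following sketch; scoring a set by the mean confidence of its points and thresholding at the map average are assumptions consistent with the averaging example above.

    # Sketch: score each lane line point set by the mean confidence of its
    # points, then keep only sets scoring above the map-average threshold.
    import numpy as np

    def filter_lane_sets(lane_sets, conf):
        """lane_sets: list of [(row, col), ...]; conf: (H, W) confidence map."""
        threshold = float(conf.mean())    # example threshold from the text
        scored = [(np.mean([conf[r, c] for r, c in s]), s) for s in lane_sets]
        scored.sort(key=lambda t: t[0], reverse=True)  # sort by score, descending
        return [s for score, s in scored if score > threshold]

    conf = np.random.rand(8, 16)
    lanes = [[(2, 3), (3, 3)], [(2, 10), (3, 11)]]
    print(len(filter_lane_sets(lanes, conf)))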
In the embodiments of this application, the overlap IOU between two lane line point sets is defined as follows:

IOU(Inst_1, Inst_2) = |Inst_1 ∩ Inst_2| / ( |Inst_1| + |Inst_2| − |Inst_1 ∩ Inst_2| )

where Inst_1 denotes lane line point set 1 and Inst_2 denotes lane line point set 2. When there is a lane line point in each of the two sets such that the two points lie in the same pixel row — that is, their vertical coordinates (y) are the same — and the difference between their horizontal coordinates (x) is smaller than a set threshold, the two lane lines can be considered to coincide in that pixel row.
Specifically, the number of coinciding lane line points between the two lane line point sets can be obtained, and the overlap between the two sets is then determined from the number of coinciding lane line points.
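The following sketch computes this IOU; the per-row column tolerance of 2 pixels is an illustrative assumption for the set threshold mentioned above.

    # Sketch: IOU between two lane line point sets, counting a pair of points
    # as coinciding when they share a row and their columns differ by less
    # than a set tolerance.
    def lane_iou(inst1, inst2, col_tol=2):
        """inst1, inst2: lists of (row, col) lane line points."""
        rows2 = {}
        for r, c in inst2:
            rows2.setdefault(r, []).append(c)
        overlap = sum(
            1 for r, c in inst1
            if any(abs(c - c2) < col_tol for c2 in rows2.get(r, []))
        )
        return overlap / (len(inst1) + len(inst2) - overlap)

    a = [(y, 5) for y in range(10)]
    b = [(y, 6) for y in range(10)]
    print(lane_iou(a, b))   # 10 / (10 + 10 - 10) = 1.0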
Step S4010: determine whether the overlap between each pair of lane line point sets is greater than the target threshold; if so, perform step S4012.
In the embodiments of this application, the value of the target threshold is not specifically limited. Exemplarily, the target threshold may be set autonomously by the lane line detection device, or set by the lane line detection device according to the user's needs; for example, the target threshold is 0.85.
Step S4012: delete either one of the two lane line point sets.
For example, the lane line image is as shown in FIG. 5b and includes three lane lines: lane line 1, lane line 2, and lane line 3. Since the determined target pixel points come from multiple pixel rows, the lane line point set corresponding to each target pixel point may be as shown in FIG. 5c; in FIG. 5c, it can be seen that there are two recognition results for lane line 2, namely lane line 21 and lane line 22. In this case, either one of the two lane line point sets is deleted, and the recognition result of the lane line image is then as shown in FIG. 5d.
By implementing this embodiment, the lane line detection device deletes either one of two lane line point sets when their overlap is greater than the target threshold, which ensures the accuracy of the lane line recognition result and avoids false detections.
It should be noted that, in the embodiments of this application, after the lane line point set corresponding to each target pixel point is obtained, as shown in FIG. 5e, the recognition result for the lane line image can be displayed on the vehicle's central control screen 501, so that the autonomous driving apparatus or the driver can drive according to the recognition result.
FIGS. 1a to 5d above describe in detail the lane line detection methods involved in the embodiments of this application; the apparatuses involved in the embodiments of this application are introduced below with reference to the accompanying drawings.
FIG. 6 is a schematic structural diagram of a lane line detection apparatus 60 according to an embodiment of this application. The lane line detection apparatus 60 shown in FIG. 6 may include:
an image obtaining unit 600, configured to obtain a lane line image to be recognized;
a candidate pixel point determining unit 602, configured to determine, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set, where the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location;
a position obtaining unit 604, configured to select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point within its neighborhood, the at least three position points lying on the same lane line; and
an expansion unit 606, configured to take the target pixel point as a starting point and expand according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
In a possible implementation, the candidate pixel point determining unit 602 is specifically configured to:
generate a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
determine, in the confidence map, the candidate pixel points for identifying the lane line region.
In a possible implementation, the at least three position points include a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0. The expansion unit 606 is specifically configured to:
adjust the starting position of the target pixel point according to a first offset to obtain the final position of the target pixel point, where the first offset is the offset from the target pixel point to the second position point;
taking the final position of the target pixel point as a starting point, obtain a first lane line point expanded from the target pixel point according to a second offset, and obtain a second lane line point expanded from the target pixel point according to a third offset, where the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point;
when the confidence value corresponding to the first lane line point is greater than a first confidence value, take the first lane line point as the current target pixel point and perform the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and
when the confidence value corresponding to the second lane line point is greater than the first confidence value, take the second lane line point as the current target pixel point and perform the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value.
In a possible implementation, the position obtaining unit 604 includes a pixel point selection unit, where
the pixel point selection unit is configured to select a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
in the target pixel row, select all pixel points greater than the second confidence value as the target pixel points, or select the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as the target pixel points.
In a possible implementation, the position obtaining unit 604 includes a pixel point selection unit, where
the pixel point selection unit is configured to select target pixel rows in the lane line image according to the confidence map, where the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than a second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
in the target pixel rows, select all pixel points greater than the second confidence value as the target pixel points.
In a possible implementation, when the target pixel points come from multiple pixel rows, the apparatus 60 further includes: a de-duplication unit 608, configured to: obtain the degree of overlap between each pair of lane line point sets; and, if the overlap between a pair of lane line point sets is greater than a target threshold, delete either one of the two lane line point sets.
In the embodiments of this application, for the specific implementation of each unit, reference may be made to the relevant descriptions in the above embodiments, which are not repeated here.
By implementing this embodiment, the lane line detection device can take the target pixel point as a starting point and, according to the at least three positions associated with the target pixel point, obtain the other lane line instance points expanded from the target pixel point, and then obtain the lane line point set. In this way, the lane lines in the lane line image can be recognized quickly and accurately.
As shown in FIG. 7, an embodiment of this application provides a lane line detection device 70, which may include a processor 701, a memory 702, a communication bus 703, and a communication interface 704, where the processor 701 is connected to the memory 702 and the communication interface 704 through the communication bus.
The processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), a neural network processing unit (NPU), or one or more integrated circuits, and is configured to execute related programs so as to perform the lane line detection methods described in the method embodiments of this application.
The processor 701 may also be an integrated circuit chip with signal processing capability. During implementation, the steps of the lane line detection method of this application may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the embodiments of this application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 702; the processor 701 reads the information in the memory 702 and, in combination with its hardware, executes the lane line detection method of the method embodiments of this application.
The memory 702 may be a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 702 may store programs and data, for example, the program of the lane line detection method in the embodiments of this application. When the program stored in the memory 702 is executed by the processor 701, the processor 701 and the communication interface 704 are used to execute the steps of the lane line detection method of the embodiments of this application.
For example, the program may be a program used to implement the lane line detection method in the embodiments of this application.
The communication interface 704 uses a transceiver apparatus such as, but not limited to, a transceiver to implement communication between the lane line detection device 700 and other devices or communication networks. For example, the trained neural network can be obtained through the communication interface 704 to realize information interaction with an execution device, a client device, a user device, or a terminal device.
Optionally, the lane line detection device may further include an artificial intelligence processor 705, which may be any processor suitable for large-scale exclusive-or operation processing, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a graphics processing unit (GPU). The artificial intelligence processor 705 can be mounted on a host CPU as a coprocessor, with the host CPU assigning tasks to it. The artificial intelligence processor 705 can implement one or more of the operations involved in the above lane line detection method. For example, taking an NPU as an example, the core part of the NPU is an arithmetic circuit, and a controller controls the arithmetic circuit to extract matrix data from the memory 702 and perform multiply-add operations.
The processor 701 is configured to invoke the data and program code in the memory to perform:
obtaining a lane line image to be recognized;
determining, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set, where the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location;
selecting a target pixel point from the candidate point set, and obtaining at least three position points associated with the target pixel point within its neighborhood, where the at least three position points lie on the same lane line; and
taking the target pixel point as a starting point, expanding according to the at least three position points associated with the target pixel point to obtain the lane line point set corresponding to the target pixel point.
The processor 701 determining, based on the lane line image, candidate pixel points for identifying the lane line region to obtain the candidate point set includes:
generating a confidence map of the lane line image, where the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
determining, in the confidence map, the candidate pixel points for identifying the lane line region.
The at least three position points include a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0. The processor 701 taking the target pixel point as a starting point and expanding according to the at least three position points associated with the target pixel point includes:
adjusting the starting position of the target pixel point according to a first offset to obtain the final position of the target pixel point, where the first offset is the offset from the target pixel point to the second position point;
taking the final position of the target pixel point as a starting point, obtaining a first lane line point expanded from the target pixel point according to a second offset, and obtaining a second lane line point expanded from the target pixel point according to a third offset, where the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point;
when the confidence value corresponding to the first lane line point is greater than a first confidence value, taking the first lane line point as the current target pixel point and performing the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and
when the confidence value corresponding to the second lane line point is greater than the first confidence value, taking the second lane line point as the current target pixel point and performing the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value.
The at least three position points include a fourth position point and a fifth position point; the pixel row N rows above the row of the target pixel point is the row of the fourth position point; the pixel row M rows below the row of the target pixel point is the row of the fifth position point; M and N are integers greater than 0. The processor 701 taking the target pixel point as a starting point and expanding according to the at least three position points associated with the target pixel point includes:
taking the starting position of the target pixel point as a starting point, obtaining a third lane line point expanded from the target pixel point according to a fourth offset, and obtaining a fourth lane line point expanded from the target pixel point according to a fifth offset, where the fourth offset is the offset from the target pixel point to the fourth position point and the fifth offset is the offset from the target pixel point to the fifth position point;
when the confidence value corresponding to the third lane line point is greater than a first confidence value, taking the third lane line point as the current target pixel point and performing the step of obtaining a third lane line point expanded from the target pixel point according to the fourth offset, until the confidence value corresponding to the expanded third lane line point is not greater than the first confidence value; and
when the confidence value corresponding to the fourth lane line point is greater than the first confidence value, taking the fourth lane line point as the current target pixel point and performing the step of obtaining a fourth lane line point expanded from the target pixel point according to the fifth offset, until the confidence value corresponding to the expanded fourth lane line point is not greater than the first confidence value.
The at least three position points include a sixth position point, a seventh position point, and an eighth position point; the pixel row N rows above the row of the target pixel point is the row of the sixth position point; the row of the target pixel point is the same as the row of the seventh position point; the pixel row M rows below the row of the target pixel point is the row of the eighth position point; M and N are integers greater than 0. The processor 701 taking the target pixel point as a starting point and expanding according to the at least three position points associated with the target pixel point includes:
taking the target pixel point as a starting point, adjusting the starting position of the target pixel point according to a sixth offset to obtain a sixth lane line point, where the sixth offset is the offset from the target pixel point to the seventh position point;
obtaining, according to a seventh offset, the candidate pixel point closest to the sixth position point to obtain a seventh lane line point, and obtaining an eighth lane line point expanded from the seventh lane line point according to the sixth offset; and
obtaining, according to an eighth offset, the candidate pixel point closest to the eighth position point to obtain a ninth lane line point, and obtaining a tenth lane line point expanded from the ninth lane line point according to the sixth offset; the sixth lane line point, the eighth lane line point, and the tenth lane line point are used to form the lane line point set corresponding to the target pixel point.
The processor 701 selecting the target pixel point from the candidate point set includes:
selecting a target pixel row in the lane line image according to the confidence map, where the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
in the target pixel row, selecting all pixel points greater than the second confidence value as the target pixel points, or selecting the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as the target pixel points.
Alternatively, the processor 701 selecting the target pixel point from the candidate point set includes:
selecting target pixel rows in the lane line image according to the confidence map, where the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than a second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
in the target pixel rows, selecting all pixel points greater than the second confidence value as the target pixel points.
When the target pixel points come from multiple pixel rows, the processor 701 may be further configured to:
obtain the degree of overlap between each pair of lane line point sets; and
if the overlap between a pair of lane line point sets is greater than a target threshold, delete either one of the two lane line point sets.
It should be understood that, for the implementation of each component, reference may also be made to the corresponding description in the above lane line detection method embodiments, which is not repeated in the embodiments of this application.
An embodiment of this application further provides a computer-readable storage medium storing a computer program, where the computer program causes an electronic device to execute some or all of the steps of any lane line detection method described in the above method embodiments.
An embodiment of this application further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause an electronic device to execute some or all of the steps of any lane line detection method described in the above method embodiments.
It can be understood that a person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed in this application can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
Those skilled in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps disclosed in the embodiments of this application can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions described by the various illustrative logical blocks, modules, and steps may be stored on or transmitted over a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which correspond to tangible media such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementing the techniques described in this application. A computer program product may include a computer-readable medium.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in this application, and these shall all be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (15)

  1. A lane line detection method, comprising:
    obtaining a lane line image to be recognized;
    determining, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set, wherein the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location;
    selecting a target pixel point from the candidate point set, and obtaining at least three position points associated with the target pixel point within its neighborhood, wherein the at least three position points lie on the same lane line; and
    taking the target pixel point as a starting point, expanding according to the at least three position points associated with the target pixel point to obtain a lane line point set corresponding to the target pixel point.
  2. The method according to claim 1, wherein the determining, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set comprises:
    generating a confidence map of the lane line image, wherein the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    determining, in the confidence map, the candidate pixel points for identifying the lane line region.
  3. The method according to claim 1, wherein the at least three position points comprise a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0; and the taking the target pixel point as a starting point, expanding according to the at least three position points associated with the target pixel point comprises:
    adjusting the starting position of the target pixel point according to a first offset to obtain a final position of the target pixel point, wherein the first offset is the offset from the target pixel point to the second position point;
    taking the final position of the target pixel point as a starting point, obtaining a first lane line point expanded from the target pixel point according to a second offset, and obtaining a second lane line point expanded from the target pixel point according to a third offset, wherein the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point;
    when the confidence value corresponding to the first lane line point is greater than a first confidence value, taking the first lane line point as the current target pixel point and performing the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and
    when the confidence value corresponding to the second lane line point is greater than the first confidence value, taking the second lane line point as the current target pixel point and performing the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value.
  4. The method according to any one of claims 1 to 3, wherein the selecting a target pixel point from the candidate point set comprises:
    selecting a target pixel row in the lane line image according to a confidence map, wherein the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    selecting, in the target pixel row, all pixel points greater than the second confidence value as the target pixel points, or selecting the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as the target pixel points.
  5. The method according to any one of claims 1 to 3, wherein the selecting a target pixel point from the candidate point set comprises:
    selecting target pixel rows in the lane line image according to a confidence map, wherein the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than a second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    selecting, in the target pixel rows, all pixel points greater than the second confidence value as the target pixel points.
  6. The method according to claim 1, wherein, when the target pixel points come from multiple pixel rows, the method further comprises:
    obtaining the degree of overlap between each pair of lane line point sets; and
    if the overlap between a pair of lane line point sets is greater than a target threshold, deleting either one of the two lane line point sets.
  7. A lane line detection apparatus, comprising:
    an image obtaining unit, configured to obtain a lane line image to be recognized;
    a candidate pixel point determining unit, configured to determine, based on the lane line image, candidate pixel points for identifying a lane line region to obtain a candidate point set, wherein the lane line region is the region in the lane line image where a lane line is located together with the surrounding region of that location;
    a position obtaining unit, configured to select a target pixel point from the candidate point set and obtain at least three position points associated with the target pixel point within its neighborhood, wherein the at least three position points lie on the same lane line; and
    an expansion unit, configured to take the target pixel point as a starting point and expand according to the at least three position points associated with the target pixel point to obtain a lane line point set corresponding to the target pixel point.
  8. The apparatus according to claim 7, wherein the candidate pixel point determining unit is specifically configured to:
    generate a confidence map of the lane line image, wherein the confidence map contains the confidence value of each pixel in the lane line image, and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    determine, in the confidence map, the candidate pixel points for identifying the lane line region.
  9. The apparatus according to claim 7, wherein the at least three position points comprise a first position point, a second position point, and a third position point; the pixel row N rows above the row of the target pixel point is the row of the first position point; the row of the target pixel point is the same as the row of the second position point; the pixel row M rows below the row of the target pixel point is the row of the third position point; M and N are integers greater than 0; and the expansion unit is specifically configured to:
    adjust the starting position of the target pixel point according to a first offset to obtain a final position of the target pixel point, wherein the first offset is the offset from the target pixel point to the second position point;
    taking the final position of the target pixel point as a starting point, obtain a first lane line point expanded from the target pixel point according to a second offset, and obtain a second lane line point expanded from the target pixel point according to a third offset, wherein the second offset is the offset from the target pixel point to the first position point and the third offset is the offset from the target pixel point to the third position point;
    when the confidence value corresponding to the first lane line point is greater than a first confidence value, take the first lane line point as the current target pixel point and perform the step of obtaining a first lane line point expanded from the target pixel point according to the first offset and the second offset, until the confidence value corresponding to the expanded first lane line point is not greater than the first confidence value; and
    when the confidence value corresponding to the second lane line point is greater than the first confidence value, take the second lane line point as the current target pixel point and perform the step of obtaining a second lane line point expanded from the target pixel point according to the first offset and the third offset, until the confidence value corresponding to the expanded second lane line point is not greater than the first confidence value.
  10. The apparatus according to any one of claims 7 to 9, wherein the position obtaining unit comprises a pixel point selection unit, wherein
    the pixel point selection unit is configured to: select a target pixel row in the lane line image according to a confidence map, wherein the target pixel row is the pixel row with the largest number of target confidence local maxima within the neighborhoods of first pixel points; a first pixel point is a pixel point greater than a second confidence threshold; a target confidence local maximum is greater than the second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    select, in the target pixel row, all pixel points greater than the second confidence value as the target pixel points, or select the pixel points with locally maximal confidence within the neighborhoods of pixel points greater than the second confidence value as the target pixel points.
  11. The apparatus according to any one of claims 7 to 9, wherein the position obtaining unit comprises a pixel point selection unit, wherein
    the pixel point selection unit is configured to: select target pixel rows in the lane line image according to a confidence map, wherein the target pixel rows are a plurality of pixel rows in which the number of second pixel points is greater than a target value; a second pixel point is a pixel point greater than a second confidence threshold; the confidence map contains the confidence value of each pixel in the lane line image; and the confidence value represents the degree to which each pixel in the lane line image can be trusted to belong to the lane line region; and
    select, in the target pixel rows, all pixel points greater than the second confidence value as the target pixel points.
  12. The apparatus according to claim 7, wherein, when the target pixel points come from multiple pixel rows, the apparatus further comprises:
    a de-duplication unit, configured to obtain the degree of overlap between each pair of lane line point sets, and, if the overlap between a pair of lane line point sets is greater than a target threshold, delete either one of the two lane line point sets.
  13. A lane line detection device, comprising a processor and a memory connected to each other, wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to invoke the program instructions to execute the method according to any one of claims 1 to 6.
  14. A chip, comprising a processor, a memory, and a communication interface, wherein the processor reads, through the communication interface, instructions stored in the memory to execute the method according to any one of claims 1 to 6.
  15. A computer-readable storage medium storing a computer program, wherein the computer program comprises program instructions that, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 6.
PCT/CN2020/114289 2020-09-09 2020-09-09 WO2022051951A1 (zh) Lane line detection method, related device, and computer-readable storage medium

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2020/114289 WO2022051951A1 (zh) 2020-09-09 2020-09-09 Lane line detection method, related device, and computer-readable storage medium
EP20952739.9A EP4202759A4 (en) 2020-09-09 2020-09-09 TRAFFIC LANE LINE DETECTION METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIUM
CN202080006576.2A CN114531913A (zh) 2020-09-09 2020-09-09 Lane line detection method, related device, and computer-readable storage medium
US18/180,274 US20230215191A1 (en) 2020-09-09 2023-03-08 Lane line detection method, related device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/114289 WO2022051951A1 (zh) 2020-09-09 2020-09-09 Lane line detection method, related device, and computer-readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/180,274 Continuation US20230215191A1 (en) 2020-09-09 2023-03-08 Lane line detection method, related device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022051951A1 true WO2022051951A1 (zh) 2022-03-17

Family

ID=80632598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114289 WO2022051951A1 (zh) 2020-09-09 2020-09-09 Lane line detection method, related device, and computer-readable storage medium

Country Status (4)

Country Link
US (1) US20230215191A1 (zh)
EP (1) EP4202759A4 (zh)
CN (1) CN114531913A (zh)
WO (1) WO2022051951A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091648A (zh) * 2023-02-09 2023-05-09 禾多科技(北京)有限公司 Lane line generation method and apparatus, storage medium, and electronic apparatus

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131968B (zh) * 2022-06-28 2023-07-11 重庆长安汽车股份有限公司 Matching and fusion method based on lane line point sets and an attention mechanism
CN115471708B (zh) * 2022-09-27 2023-09-12 禾多科技(北京)有限公司 Lane line type information generation method, apparatus, device, and computer-readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839264A (zh) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Lane line detection method
US20150055831A1 (en) * 2012-03-19 2015-02-26 Nippon Soken, Inc. Apparatus and method for recognizing a lane
CN109583280A (zh) * 2017-09-29 2019-04-05 比亚迪股份有限公司 Lane line recognition method, apparatus, device, and storage medium
CN109740469A (zh) * 2018-12-24 2019-05-10 百度在线网络技术(北京)有限公司 Lane line detection method and apparatus, computer device, and storage medium
CN111126106A (zh) * 2018-10-31 2020-05-08 沈阳美行科技有限公司 Lane line recognition method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875603B (zh) * 2018-05-31 2021-06-04 上海商汤智能科技有限公司 Lane-line-based intelligent driving control method and apparatus, and electronic device
CN109147368A (zh) * 2018-08-22 2019-01-04 北京市商汤科技开发有限公司 Lane-line-based intelligent driving control method and apparatus, and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150055831A1 (en) * 2012-03-19 2015-02-26 Nippon Soken, Inc. Apparatus and method for recognizing a lane
CN103839264A (zh) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Lane line detection method
CN109583280A (zh) * 2017-09-29 2019-04-05 比亚迪股份有限公司 Lane line recognition method, apparatus, device, and storage medium
CN111126106A (zh) * 2018-10-31 2020-05-08 沈阳美行科技有限公司 Lane line recognition method and apparatus
CN109740469A (zh) * 2018-12-24 2019-05-10 百度在线网络技术(北京)有限公司 Lane line detection method and apparatus, computer device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091648A (zh) * 2023-02-09 2023-05-09 禾多科技(北京)有限公司 Lane line generation method and apparatus, storage medium, and electronic apparatus
CN116091648B (zh) * 2023-02-09 2023-12-01 禾多科技(北京)有限公司 Lane line generation method and apparatus, storage medium, and electronic apparatus

Also Published As

Publication number Publication date
CN114531913A (zh) 2022-05-24
EP4202759A1 (en) 2023-06-28
US20230215191A1 (en) 2023-07-06
EP4202759A4 (en) 2023-10-25

Similar Documents

Publication Publication Date Title
CN109901574B (zh) Autonomous driving method and apparatus
WO2022027304A1 (zh) Test method and apparatus for an autonomous vehicle
WO2021027568A1 (zh) Obstacle avoidance method and apparatus
WO2021135371A1 (zh) Autonomous driving method, related device, and computer-readable storage medium
WO2021000800A1 (zh) Method and apparatus for inferring the drivable region of a road
WO2022001773A1 (zh) Trajectory prediction method and apparatus
WO2021102955A1 (zh) Path planning method and path planning apparatus for a vehicle
WO2021103511A1 (zh) Operational design domain (ODD) determination method, apparatus, and related device
WO2022021910A1 (zh) Vehicle collision detection method, apparatus, and computer-readable storage medium
WO2021212379A1 (zh) Lane line detection method and apparatus
WO2022051951A1 (zh) Lane line detection method, related device, and computer-readable storage medium
WO2021057344A1 (zh) Data presentation method and terminal device
WO2022062825A1 (zh) Vehicle control method and apparatus, and vehicle
CN112512887A (zh) Driving decision selection method and apparatus
WO2022001366A1 (zh) Lane line detection method and apparatus
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
CN114693540A (zh) Image processing method and apparatus, and intelligent vehicle
WO2022178858A1 (zh) Vehicle driving intention prediction method, apparatus, terminal, and storage medium
WO2021110166A1 (zh) Road structure detection method and apparatus
WO2021159397A1 (zh) Detection method and detection apparatus for the drivable region of a vehicle
WO2022061702A1 (zh) Driving reminder method, apparatus, and system
CN115508841A (zh) Road edge detection method and apparatus
WO2022061725A1 (zh) Traffic element observation method and apparatus
WO2022041820A1 (zh) Lane-change trajectory planning method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20952739

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020952739

Country of ref document: EP

Effective date: 20230323

NENP Non-entry into the national phase

Ref country code: DE