WO2019228211A1 - Lane line-based intelligent driving control method and apparatus, and electronic device - Google Patents

Lane line-based intelligent driving control method and apparatus, and electronic device

Info

Publication number
WO2019228211A1
WO2019228211A1 (application PCT/CN2019/087622)
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
vehicle
lane
preset threshold
driving control
Prior art date
Application number
PCT/CN2019/087622
Other languages
English (en)
French (fr)
Inventor
刘文志
于晨笛
程光亮
朱海波
Original Assignee
上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Priority to JP2020554361A (JP7024115B2)
Priority to SG11202005094XA
Publication of WO2019228211A1
Priority to US16/886,163 (US11314973B2)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/167 - Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Definitions

  • Lane line detection is a key technology in automatic driving and assisted driving. It can detect the lane lines around a vehicle on the road, so as to determine the current position of the vehicle and provide key information for subsequent warning decisions.
  • the embodiments of the present disclosure provide a technical solution for intelligent driving control based on lane lines.
  • a lane line-based intelligent driving control device, including:
  • a determining module configured to determine an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line according to a driving state of the vehicle and a detection result of the lane line;
  • a control module configured to perform intelligent driving control on the vehicle according to the estimated distance and / or the estimated time.
  • an electronic device including:
  • a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, wherein, when the computer program is executed, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
  • a computer program including computer instructions, and when the computer instructions are run in a processor of a device, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
  • a computer program product for storing computer-readable instructions that, when executed, cause a computer to perform the lane line-based intelligent driving control method described in any one of the foregoing possible implementations.
  • the computer program product is a computer storage medium.
  • the computer program product is a software product, such as a Software Development Kit (SDK).
  • a lane line detection result of the vehicle driving environment is obtained, and the estimated distance and/or estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result.
  • The embodiments of the present disclosure thus implement intelligent control of the driving state of the vehicle based on lane lines, which helps to improve driving safety.
  • FIG. 1 is a flowchart of an embodiment of a lane line-based intelligent driving control method according to the present disclosure.
  • FIG. 2 is a flowchart of another embodiment of a lane line-based intelligent driving control method according to the present disclosure.
  • FIG. 3 is a flowchart of still another embodiment of a lane line-based intelligent driving control method according to the present disclosure.
  • FIG. 4 is an example of two lane lines in the embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an embodiment of an intelligent driving control device based on lane lines of the present disclosure.
  • FIG. 6 is a schematic structural diagram of another embodiment of a lane line-based intelligent driving control device according to the present disclosure.
  • FIG. 7 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
  • "a plurality" may refer to two or more, and "at least one" may refer to one, two, or more.
  • Embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with many other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments including any of these systems, and the like.
  • Electronic devices such as a terminal device, a computer system, and a server can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
  • program modules can include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
  • the computer system / server can be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on a local or remote computing system storage medium including a storage device.
  • FIG. 1 is a flowchart of an embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 1, the lane line-based intelligent driving control method of this embodiment includes:
  • the lane line detection result in the vehicle driving environment may be obtained, for example, by detecting lane lines in the vehicle driving environment based on a neural network, for instance by using a neural network to perform lane line detection on an image of the vehicle driving environment to obtain the lane line detection result; or, the lane line detection result in the driving environment of the vehicle may be obtained directly from an Advanced Driving Assistance System (ADAS), that is, the lane line detection result in the ADAS is used directly.
  • the operation 102 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by an acquisition module executed by the processor.
  • the operation 104 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a determining module executed by the processor.
  • the intelligent driving control performed on the vehicle may include, but is not limited to, controlling at least one of the following: automatic driving control, assisted driving control, and the like.
  • the automatic driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: braking, deceleration, changing the driving direction, lane keeping, driving mode switching control (for example, switching from an automatic driving mode to a non-automatic driving mode, or from a non-automatic driving mode to an automatic driving mode), and so on.
  • the driving mode switching control may control the vehicle to switch from an automatic driving mode to a non-automatic driving mode (such as a manual driving mode), or to switch from a non-automatic driving mode to an automatic driving mode.
  • the assisted driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: lane line departure warning, lane keeping prompts, and other operations that help prompt the driver to control the driving state of the vehicle.
  • the operation 106 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control module executed by the processor.
  • In this embodiment, a lane line detection result of the vehicle driving environment is obtained, and the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result; intelligent driving control such as automatic driving or assisted driving is then performed on the vehicle according to the estimated distance and/or estimated time.
  • The embodiments of the present disclosure thus implement intelligent control of the driving state of the vehicle based on lane lines, which helps to reduce or avoid traffic accidents caused by vehicles driving out of the lane line and thereby improves driving safety.
  • FIG. 2 is a flowchart of another embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 2, the lane line-based intelligent driving control method of this embodiment includes:
  • the lane line probability map is used to indicate a probability value that at least one pixel point in the image belongs to the lane line.
  • the neural network in the embodiments of the present disclosure may be a deep neural network, such as a convolutional neural network, and may be obtained in advance by training with sample images and their pre-labeled, ground-truth lane line probability maps.
  • training a neural network using sample images and their ground-truth lane line probability maps can be achieved, for example, as follows: semantic segmentation is performed on the sample image by the neural network, and a predicted lane line probability map is output; a loss function value of the neural network is obtained according to the difference between the predicted lane line probability map and the ground-truth lane line probability map at least one pixel point, and the neural network is trained based on the loss function value, for example by a gradient-based update method in which the gradient is back-propagated through the chain rule and the parameter values of each network layer in the neural network are adjusted until a preset condition is satisfied, for example, until the difference between the predicted lane line probability map and the ground-truth lane line probability map at least one pixel point is smaller than a preset threshold.
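Per pixel, the training procedure above reduces to a softmax cross-entropy classification problem trained by gradient descent through the chain rule. The following is a minimal numpy sketch of that reduction for a single pixel's (N+1)-way classifier; the linear model, shapes, learning rate, and data are all illustrative stand-ins, not the patent's network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# N = 4 lane lines plus background -> 5 classes per pixel.
N = 4
rng = np.random.default_rng(0)
W = rng.normal(size=(N + 1, 8))   # toy per-pixel classifier weights
x = rng.normal(size=8)            # feature vector for one pixel
y = 2                             # ground-truth class index for this pixel

for _ in range(200):
    p = softmax(W @ x)            # predicted class probabilities for the pixel
    loss = -np.log(p[y])          # cross-entropy against the ground truth
    grad = np.outer(p - np.eye(N + 1)[y], x)  # chain-rule gradient dL/dW
    W -= 0.1 * grad               # gradient-descent parameter update

assert softmax(W @ x).argmax() == y  # the pixel is now classified correctly
```

In the real network the same loss and update are applied jointly over all pixels and all layers, with the gradient back-propagated through the feature-extraction layers as well.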
  • the operation 202 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a detection unit run by a processor or a neural network in the detection unit.
  • the operation 204 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a detection unit or a determination subunit in the detection unit that is run by the processor.
  • the operation 206 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determining module executed by the processor.
  • the operation 208 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control module executed by the processor.
  • the image is semantically segmented by a neural network, a lane line probability map is output, and an area where the lane line is located is determined according to the lane line probability map.
  • since the neural network is based on deep learning, it can automatically learn the characteristics of lane lines from a large number of labeled lane line images covering scenes such as curves, missing lane lines, road edges, dim light, and backlighting.
  • Lane lines can therefore be effectively identified in various complex driving scenarios, such as curves, missing lane lines, road edges, dim light, and backlighting, which improves the accuracy of lane line detection, yields accurate estimated distances and/or estimated times, and thereby helps to improve the accuracy of intelligent driving control and driving safety.
  • the method may further include: preprocessing the original image including the driving environment of the vehicle to obtain the foregoing image including the driving environment of the vehicle.
  • the neural network is used to perform semantic segmentation on the above-mentioned image obtained by preprocessing.
  • the preprocessing of the original image may, for example, scale and crop the original image collected by the camera into an image of a preset size before the image is input to the neural network, which reduces the complexity of the semantic segmentation performed by the neural network, reduces the time consumed, and improves processing efficiency.
  • the preprocessing may also select, from the original images collected by the camera, images meeting preset image quality standards (such as image sharpness and exposure) and input them to the neural network for processing, so as to improve the accuracy of semantic segmentation and thus the accuracy of lane line detection.
  • using a neural network to perform semantic segmentation on an image including the driving environment of the vehicle and outputting a lane line probability map may include:
  • the feature map is semantically segmented by a neural network to obtain a lane line probability map of N lane lines.
  • the pixel value of each pixel in the lane line probability map of each lane line is used to indicate the probability value that the corresponding pixel point in the image belongs to that lane line, and the value of N is an integer greater than 0. For example, in some optional examples, N has a value of 4.
  • the neural network in the embodiments of the present disclosure may include a network layer for feature extraction and a network layer for classification.
  • the network layer used for feature extraction may include, for example, a convolution layer, a batch normalization (BN) layer, and a non-linear layer.
  • Feature extraction is performed on the image through the convolutional layer, the BN layer, and the non-linear layer in turn, and a feature map is generated; the feature map is semantically segmented through the network layer for classification, and the lane line probability map of multiple lane lines is obtained.
  • the lane line probability maps of the N lane lines may be a probability map of N channels, in which the pixel values of each channel respectively represent the probability values of the corresponding pixel points in the image belonging to that lane line.
  • the lane line probability maps of the above N lane lines may also be obtained from a probability map of N + 1 channels, where the N + 1 channels respectively correspond to the N lane lines and the background; that is, the probability map of each channel represents the probability that at least one pixel point in the above image belongs to the lane line or background corresponding to that channel.
  • the feature map is semantically segmented through a neural network to obtain a lane line probability map of N lane lines, which may include:
  • the feature map is semantically segmented by a neural network to obtain a probability map of N + 1 channels.
  • the N + 1 channels respectively correspond to the N lane lines and the background; that is, the probability map of each channel in the probability map of N + 1 channels indicates the probability that at least one pixel point in the above image belongs to the lane line or background corresponding to that channel.
  • a lane line probability map of N lane lines is obtained from a probability map of N + 1 channels.
  • the neural network in the embodiment of the present disclosure may include a network layer for feature extraction, a network layer for classification, and a normalization (Softmax) layer.
  • Feature extraction is performed on the image through each network layer used for feature extraction in order to generate a series of feature maps; the finally output feature map is semantically segmented through the network layer used for classification to obtain the lane line probability map of N + 1 channels;
  • the Softmax layer then normalizes the lane line probability map of N + 1 channels, converting the probability value of each pixel into a value in the range of 0 to 1.
  • the network layer used for classification may multi-classify each pixel in the feature map.
  • for example, when N is 4, each pixel in the feature map is classified into five categories (background, left-left lane line, left lane line, right lane line, and right-right lane line), and a probability map of each pixel belonging to each category is output, so as to obtain the probability map of the above N + 1 channels.
  • The probability value of each pixel in each probability map represents the probability that the corresponding pixel point in the image belongs to the category corresponding to that probability map.
  • N is the number of lane lines in the driving environment of the vehicle, and may be any integer value greater than 0.
  • for example, when the value of N is 2, the N + 1 channels correspond to the background, left lane line, and right lane line in the driving environment of the vehicle; or, when the value of N is 3, the N + 1 channels correspond to the background, left lane line, middle lane line, and right lane line in the vehicle driving environment; or, when N is 4, the N + 1 channels correspond to the background, left-left lane line, left lane line, right lane line, and right-right lane line.
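The channel layout above can be sketched in a few lines of numpy: raw per-pixel class scores of shape (N+1, H, W) are normalized over the channel axis by a softmax, so each channel becomes a probability map in [0, 1] and the channels sum to 1 at every pixel. The scores here are random placeholders for the network output:

```python
import numpy as np

# Sketch: turn an (N+1, H, W) score map into per-channel probability maps,
# where channel 0 is background and channels 1..N are lane lines
# (e.g. left-left, left, right, right-right when N = 4). Shapes are illustrative.
N, H, W = 4, 4, 6
rng = np.random.default_rng(1)
scores = rng.normal(size=(N + 1, H, W))       # raw per-pixel class scores

e = np.exp(scores - scores.max(axis=0))       # softmax over the channel axis
probs = e / e.sum(axis=0)                     # values normalised into [0, 1]

assert probs.shape == (N + 1, H, W)
assert np.allclose(probs.sum(axis=0), 1.0)    # per pixel, channels sum to 1

lane_probs = probs[1:]                        # drop background: N lane line maps
```

Dropping the background channel, as in the last line, is one way to recover the N lane line probability maps from the (N+1)-channel map, as the text describes.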
  • determining the area where the lane line is located according to the lane line probability map of one lane line in operation 204 may include:
  • a maximum connected domain search is performed in the lane line probability map to find the pixel point set belonging to the lane line;
  • the area where the lane line is located is determined based on the pixel point set belonging to the lane line.
  • for example, a breadth-first search algorithm may be used to perform the maximum connected domain search: all connected regions whose pixel probability values are greater than a first preset threshold are found, and the largest of these connected regions is taken as the region where the detected lane line is located.
  • the output of the neural network is a lane line probability map of multiple lane lines.
  • the pixel value of each pixel in the lane line probability map represents the probability value of the pixel in the corresponding image belonging to a lane line.
  • after normalization, the probability value lies in the range of 0 to 1.
  • The pixel points with a high probability of belonging to the lane line are selected from the lane line probability map via the first preset threshold, and the maximum connected domain search is then performed to find the set of pixels belonging to the lane line, which is taken as the region where the lane line is located. The above operations are performed for each lane line separately to determine the region where each lane line is located.
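The thresholding and maximum connected domain search described above can be sketched with a breadth-first search over the probability grid. The grid, threshold, and 4-connectivity below are illustrative choices, not taken from the patent:

```python
from collections import deque

def largest_component(prob, thresh):
    """Breadth-first search for the largest 4-connected region of pixels
    whose probability exceeds `thresh` (a sketch of the maximum connected
    domain search; the function name and grid are illustrative)."""
    h, w = len(prob), len(prob[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or prob[sy][sx] <= thresh:
                continue
            comp, q = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while q:                           # BFS over one connected region
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and prob[ny][nx] > thresh:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if len(comp) > len(best):
                best = comp                    # keep the largest region found
    return best

# Toy probability map: two regions above the threshold; the larger wins.
grid = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.1, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.7],
]
region = largest_component(grid, 0.5)
assert len(region) == 3  # the 3-pixel region on the left
```

The returned pixel set would then be scored by the confidence check described next before being accepted as the lane line region.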
  • the above determining the area where the lane line is located based on a set of pixels belonging to the lane line may include:
  • if the confidence of the area formed by the pixel set is greater than a second preset threshold, the area formed by the pixel set is used as the area where the lane line is located.
  • the confidence level is a probability value that an area formed by a set of pixel points is a real lane line.
  • the second preset threshold is an empirical value set according to actual needs, and can be adjusted according to actual scenarios.
  • if the confidence is too small, that is, not greater than the second preset threshold, the lane line is considered not to exist, and the determined region is discarded; if the confidence is large, that is, greater than the second preset threshold, the probability that the determined region is a real lane line is high, and it is determined as the region where the lane line is located.
  • FIG. 3 is a flowchart of still another embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 3, the lane line-based intelligent driving control method of this embodiment includes:
  • the lane line probability map is used to indicate a probability value that at least one pixel point in the image belongs to the lane line.
  • the operation 302 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a detection unit run by a processor or a neural network in the detection unit.
  • the operation 304 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a detection unit or a determination subunit in the detection unit that is run by the processor.
  • the lane line information includes the distance from at least one point on the lane line (for example, a plurality of points on the lane line) to the vehicle.
  • the lane line information can be expressed as a curve, a straight line, a discrete map including at least one point on the lane line and its distance to the vehicle, a data table, an equation, and so on; the embodiments of the present disclosure do not limit the expression form of the lane line information.
  • for example, the lane line equation may be a quadratic equation with three parameters (a, b, c), such as x = a*y^2 + b*y + c.
  • the two curves are two lane lines corresponding to the two lane line equations.
  • Y_max represents the maximum distance from a point on the lane line to the vehicle along the direction directly in front of the vehicle, and Y_min represents the minimum such distance.
  • the operation 306 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determining module run by the processor or a fitting processing unit in the determining module.
  • the operation 308 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determination module or a determination unit in the determination module that is executed by the processor.
  • the operation 310 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a control module executed by the processor.
  • in this embodiment, the lane line information of each lane line is obtained by performing curve fitting on the pixels in the region where each lane line is located, and the estimated distance for the vehicle to drive out of the corresponding lane line and/or the estimated time for the vehicle to drive out of the lane line is determined based on the driving state of the vehicle and the lane line information. Since the lane line information obtained by curve fitting can be expressed as a quadratic curve or a similar representation, it can fit curved lane lines well; the method therefore still has good applicability on curves and can be applied to early warning under various road conditions.
  • curve fitting is performed on pixels in an area where a lane line is located to obtain lane line information of the lane line, which may include:
  • the selected multiple pixels are converted from the camera coordinate system in which the camera is located into the world coordinate system, and the coordinates of the multiple pixels in the world coordinate system are obtained.
  • the origin of the world coordinate system can be set as required; for example, the origin can be set at the ground contact point of the left front wheel of the vehicle, with the y-axis of the world coordinate system pointing in the direction directly in front of the vehicle;
  • curve fitting is performed on the plurality of pixel points in the world coordinate system to obtain lane line information of the one lane line.
  • some pixels can be randomly selected in the area where a lane line is located.
  • based on the camera parameters (also called camera calibration parameters), these pixels are converted from the camera coordinate system into the world coordinate system, and a curve is then fitted to these pixel points in the world coordinate system to obtain the fitted curve.
  • the camera calibration parameters can include internal and external parameters.
  • the position and orientation of the camera or camera in the world coordinate system can be determined based on the external parameters.
  • the external parameters can include a rotation matrix and a translation matrix.
  • the rotation matrix and the translation matrix together describe how to convert points between the world coordinate system and the camera coordinate system; the internal parameters are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera.
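The extrinsic transform just described can be sketched as follows: with rotation matrix R and translation vector t, a world point maps to the camera frame as p_camera = R @ p_world + t, and the inverse mapping uses R's transpose. The R and t below are made-up illustration values, not calibration output:

```python
import numpy as np

# Sketch of mapping a point between the world and camera coordinate systems
# using extrinsic parameters (rotation matrix R and translation vector t).
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([0.5, -1.2, 1.4])

p_world = np.array([2.0, 10.0, 0.0])      # e.g. a lane line point on the road
p_camera = R @ p_world + t                # world -> camera
back = R.T @ (p_camera - t)               # camera -> world (R is orthonormal)

assert np.allclose(back, p_world)
```

Converting detected lane line pixels from the camera frame into this world frame is what makes the subsequent curve fit directly usable for distance estimation.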
  • curve fitting refers to calculating, from a number of discrete points, the curve formed by these points.
  • a least square method may be used to perform curve fitting based on the multiple pixel points.
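A least-squares fit of the quadratic lane line model to world-coordinate points can be sketched with `numpy.polyfit`. The sample points below are synthetic, generated from known coefficients so the recovery is exact; real fits would use the pixels selected from the lane line region:

```python
import numpy as np

# Least-squares fit of the quadratic lane line model x = a*y**2 + b*y + c,
# where y is the distance ahead of the vehicle and x the lateral offset.
a_true, b_true, c_true = 0.002, -0.05, 1.8
y = np.linspace(5.0, 60.0, 30)                      # distances ahead (metres)
x = a_true * y**2 + b_true * y + c_true             # noise-free for clarity

a, b, c = np.polyfit(y, x, deg=2)                   # least-squares solution
assert np.allclose([a, b, c], [a_true, b_true, c_true])
```

`polyfit` returns the highest-degree coefficient first, matching the (a, b, c) ordering of the lane line equation.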
  • the method may further include: filtering parameters in the lane line information of the lane line to filter out jitter and some abnormal situations, and ensure the stability of the lane line information.
  • filtering parameters in the lane line information of a lane line may include:
  • the previous frame image is a frame whose detection timing precedes that of the current image in the video; for example, it may be the frame immediately adjacent to and before the image, or a frame located one or more frames before the image.
  • Kalman filtering is an estimation method that uses the statistical characteristics of a time-varying random signal to make the estimated future value of the signal as close to the true value as possible.
  • performing Kalman filtering on the parameter values in the lane line information can improve the accuracy of the lane line information, and helps to subsequently determine the distance between the vehicle and the lane line accurately, so as to accurately warn when the vehicle is about to drive out of the lane line.
  • the method may further include: smoothing the lane line information of the same lane line across frames.
  • the lane line information fitted from each frame of image in the video will change, but adjacent frame images do not change much, so the lane line information of the current frame image can be smoothed to filter out jitter and some abnormal conditions and ensure the stability of the lane line information.
  • for example, lane lines can be determined for the first frame image participating in lane line detection in the video, and a tracker is established for each lane line to track that lane line. If the current frame image detects the same lane line, and the difference between its parameter values and the parameter values in the lane line information of the same lane line determined from the previous frame image is smaller than a third preset threshold, the parameter values in the lane line information of the current frame image are updated into the tracker of the same lane line determined in the previous frame image, so as to perform Kalman filtering on the lane line information of the same lane line in the current frame image. If the tracker of the same lane line is updated in two consecutive frames, the determination result of the lane line is considered accurate: the tracker of the lane line can be confirmed, and the lane line tracked by the tracker is taken as the final lane line result.
  • if the tracker is not updated for several consecutive frames, the corresponding lane line is considered to have disappeared, and the tracker is deleted.
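The tracker bookkeeping described above can be sketched as a small state machine: a tracker per lane line is updated when the current frame re-detects a matching line (parameter difference below a threshold), confirmed after two consecutive updates, and deleted after several missed frames. The class shape, thresholds, and field names are illustrative, not the patent's:

```python
class LaneTracker:
    """Minimal sketch of per-lane-line tracker lifecycle management."""
    def __init__(self, params):
        self.params = params      # fitted (a, b, c) for this lane line
        self.hits = 1             # consecutive frames with a matching detection
        self.misses = 0
        self.confirmed = False

    def update(self, params, match_thresh=0.5):
        diff = max(abs(p - q) for p, q in zip(params, self.params))
        if diff < match_thresh:   # same lane line as in the previous frame
            self.params = params  # feed the new values into the tracker/filter
            self.hits += 1
            self.misses = 0
            if self.hits >= 2:    # updated in two consecutive frames
                self.confirmed = True
            return True
        return False

    def mark_missed(self, max_misses=3):
        self.misses += 1
        return self.misses >= max_misses   # True -> delete this tracker

tracker = LaneTracker((0.001, -0.04, 1.8))
assert tracker.update((0.001, -0.05, 1.82))   # close enough: same lane line
assert tracker.confirmed
assert not tracker.mark_missed()              # 1 missed frame: keep
assert not tracker.mark_missed()
assert tracker.mark_missed()                  # 3 missed frames: delete
```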
  • determining the estimated distance of the vehicle from the lane line according to the driving state of the vehicle and the lane line detection result may include:
  • the estimated distance between the vehicle and the corresponding lane line is determined according to the vehicle's position in the world coordinate system and the lane line information of the lane line; in this embodiment, the driving state of the vehicle includes the vehicle's position in the world coordinate system.
  • determining the estimated time for the vehicle to exit the lane line according to the driving state of the vehicle and the lane line detection result may include:
  • the driving state of the vehicle includes the speed of the vehicle and the position of the vehicle in the world coordinate system.
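One way to compute the estimated distance and estimated time from these inputs can be sketched as follows, under simplifying assumptions: the world origin is at the vehicle with the y-axis pointing straight ahead (as described earlier), the lane line is the fitted quadratic x = a*y**2 + b*y + c, and the vehicle drifts toward the line at a constant lateral speed. The constant-lateral-speed model and the function names are illustrative, not the patent's exact computation:

```python
def estimated_distance(lane, vehicle_x, y=0.0):
    """Lateral gap between the vehicle and the lane line, evaluated at
    distance-ahead y (here y=0, i.e. beside the vehicle)."""
    a, b, c = lane
    lane_x = a * y * y + b * y + c     # lane line's lateral position at y
    return abs(lane_x - vehicle_x)     # metres left before crossing the line

def estimated_time(distance, lateral_speed):
    """Seconds until the line is reached at a constant lateral speed."""
    if lateral_speed <= 0:             # not drifting toward the line
        return float("inf")
    return distance / lateral_speed

lane = (0.0, 0.0, 1.8)                 # a straight line 1.8 m to the side
d = estimated_distance(lane, vehicle_x=0.0)
t = estimated_time(d, lateral_speed=0.6)
assert d == 1.8
assert abs(t - 3.0) < 1e-9
```

These two values are then compared against the preset thresholds described below to choose a control action.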
  • performing intelligent driving control on the vehicle according to the above estimated distance and / or estimated time may include:
  • intelligent driving control corresponding to the satisfied preset conditions is performed, for example, automatic driving control and / or assisted driving control corresponding to the satisfied preset conditions is performed.
  • the degree of intelligent driving control corresponding to each of the multiple preset conditions may be increased step by step.
  • corresponding intelligent driving control measures may be adopted to control the vehicle. Performing corresponding automatic driving control and / or assisted driving control can effectively prevent the vehicle from driving out of the lane line and avoid traffic accidents without interfering with normal driving, improving driving safety.
  • when the comparison result satisfies one or more preset conditions, intelligent driving control corresponding to the satisfied preset conditions is performed. In some optional examples, this may include:
  • a lane line departure prompt is given to the vehicle, for example, to remind the driver that the vehicle has deviated from the current lane or is about to drive out of the current lane line, or the like;
  • the lane line departure warning includes the lane line departure prompt.
  • the values of the fourth preset threshold and the fifth preset threshold are greater than 0, and the fifth preset threshold is less than the fourth preset threshold.
  • the values of the fourth preset threshold and the fifth preset threshold are, for example, 5 meters and 3 meters, respectively.
  • the values of the sixth preset threshold and the seventh preset threshold are greater than 0, and the seventh preset threshold is less than the sixth preset threshold.
  • the values of the sixth preset threshold and the seventh preset threshold are, for example, 5 seconds and 3 seconds, respectively.
  • if the estimated distance between the vehicle and the lane line is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, or the estimated time in which the vehicle is expected to drive out of the lane line is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, a lane line departure prompt is given to the vehicle, which can remind the driver that the vehicle is deviating toward the lane line, so that corresponding driving measures can be taken in time to prevent the vehicle from driving out of the lane line and improve driving safety.
  • the lane line departure prompt may also be given by combining the estimated distance between the vehicle and the lane line with the estimated time out of the lane line, to improve the accuracy of the lane line departure prompt. In a further optional example, the method may also include:
  • if the estimated distance is less than or equal to the fifth preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle; or,
  • if the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle; or,
  • if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle;
  • the lane line departure warning includes the lane line departure alarm, and the lane line departure alarm may be given, for example, by sound, light, electricity, or the like.
  • in the embodiments of the present disclosure, the corresponding levels of intelligent driving control are increased step by step, from giving the vehicle a lane line departure prompt to performing automatic driving control and / or a lane line departure alarm on the vehicle, so as to prevent the vehicle from driving out of the lane line and improve driving safety.
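The graded response described above can be sketched as follows, using the example threshold values from the text (4th/5th distance thresholds of 5 m/3 m, 6th/7th time thresholds of 5 s/3 s). The function name, return labels, and default values are illustrative assumptions.

```python
def control_level(distance, time_to_exit,
                  d4=5.0, d5=3.0, t6=5.0, t7=3.0):
    """Graded intelligent driving control sketch:
    - 'alarm'  : estimated distance <= 5th threshold or time <= 7th threshold
                 (automatic driving control and/or lane line departure alarm)
    - 'prompt' : distance in (5th, 4th] or time in (7th, 6th]
                 (lane line departure prompt)
    - 'none'   : otherwise."""
    if distance <= d5 or time_to_exit <= t7:
        return "alarm"
    if (d5 < distance <= d4) or (t7 < time_to_exit <= t6):
        return "prompt"
    return "none"
```

The higher-severity branch is checked first so that a case satisfying both conditions escalates to the stronger control, matching the step-by-step increase of control levels in the text.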
  • when the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold, the vehicle may be subjected to automatic driving control and / or a lane line departure alarm, where the historical frame images include at least one frame image whose detection sequence is located before the image in the video; or, when the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold, the vehicle is subjected to automatic driving control and / or a lane line departure alarm; or, when the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold, and the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold, the vehicle is subjected to automatic driving control and / or a lane line departure alarm.
  • in the embodiments of the present disclosure, the estimated distance and / or estimated time of historical frame images are also taken into account as the basis for the automatic driving control and / or the lane line departure alarm, which can improve the accuracy of the automatic driving control and / or the lane line departure alarm of the vehicle.
  • the line segment AB indicates the direction in which the vehicle will drive in its current state.
  • the absolute position A' of the vehicle in the world coordinate system can be obtained.
  • the intersection position B of the straight line A'B along the vehicle's driving direction with the target lane line can be calculated, which gives the length of the line segment A'B.
  • historical frame image information is collected. If the vehicle will drive out of the target lane line within several frames, the time is too short (less than the seventh preset threshold), and the distance A'B between the vehicle and the target lane line is too short (less than the fifth preset threshold), then automatic driving control and / or a lane line departure alarm is performed, for example, decelerating the vehicle and sounding the alarm at the same time.
  • the historical frame image information can be used to calculate the vehicle's lateral speed at the current moment. Based on the current distance of the vehicle from the target lane line, the time at which the vehicle will press the target lane line (that is, the time at which it reaches the target lane line) can be calculated and used as a basis for deciding whether to perform automatic driving control and / or a lane line departure alarm on the vehicle.
  • the distance between the vehicle and the target lane line can be obtained according to the setting of the coordinate origin of the lane line equation of the target lane line, the driving direction of the vehicle, and the width of the vehicle. For example, if the coordinate origin of the lane line equation is set at the left wheel of the vehicle and the target lane line is on the left side of the vehicle, the distance between the vehicle and the intersection of its driving direction with the target lane line may be used directly. If the coordinate origin of the lane line equation is set at the right wheel of the vehicle and the target lane line is on the left side of the vehicle, the distance between the vehicle and the target lane line is the distance between the vehicle and the intersection of its driving direction with the target lane line plus the effective width of the vehicle width projected onto the driving direction. If the coordinate origin of the lane line equation is set at the center of the vehicle and the target lane line is on the left side of the vehicle, the estimated distance between the vehicle and the target lane line is the distance between the vehicle and the intersection of its driving direction with the target lane line plus the effective width of half the vehicle width projected onto the driving direction.
  • the embodiments of the present disclosure can be applied to the scenarios of automatic driving and assisted driving to achieve accurate lane line detection, automatic driving control, and early warning of vehicle departure from lane lines.
  • any of the lane line-based intelligent driving control methods provided by the embodiments of the present disclosure may be executed by any appropriate device having a data processing capability, including but not limited to a terminal device and a server.
  • any of the lane line-based intelligent driving control methods provided in the embodiments of the present disclosure may be executed by a processor.
  • the processor executes any of the lane line-based intelligent driving control methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. Details are not repeated below.
  • the foregoing program may be stored in a computer-readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiment are performed.
  • the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
  • FIG. 5 is a schematic structural diagram of an embodiment of an intelligent driving control device based on lane lines of the present disclosure.
  • the lane line-based intelligent driving control device in this embodiment may be used to implement any one of the lane line-based intelligent driving control method embodiments of the present disclosure.
  • the lane line-based intelligent driving control device in this embodiment includes an acquisition module, a determination module, and a control module, wherein:
  • the acquisition module is configured to acquire a lane line detection result of a vehicle running environment.
  • the determining module is configured to determine an estimated distance of the vehicle from the lane line and / or an estimated time of the vehicle from the lane line according to a driving state of the vehicle and a detection result of the lane line.
  • a control module is configured to perform intelligent driving control on the vehicle according to the estimated distance and / or the estimated time.
  • based on the lane line-based intelligent driving control device provided by the foregoing embodiment of the present disclosure, a lane line detection result of the vehicle driving environment is obtained; an estimated distance for the vehicle to drive out of the lane line and / or an estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result; and intelligent driving control is performed on the vehicle according to the estimated distance and / or the estimated time.
  • thereby, the embodiment of the present disclosure implements intelligent control of the driving state of the vehicle based on the lane line, so as to keep the vehicle driving within the lane lines, reduce or avoid traffic accidents caused by the vehicle driving out of the lane line, and improve driving safety.
  • the acquisition module may include: a detection unit, configured to detect lane lines in the vehicle driving environment based on a neural network to obtain the lane line detection result; or an acquisition unit, configured to acquire the lane line detection result of the vehicle driving environment from an advanced driver assistance system.
  • the detection unit may include: a neural network, configured to perform semantic segmentation on an image including the driving environment of the vehicle and output a lane line probability map, where the lane line probability map is used to indicate the probability value that at least one pixel point in the image belongs to a lane line; and a determination subunit, configured to determine the area where the lane line is located according to the lane line probability map; the lane line detection result includes the area where the lane line is located.
  • the neural network is configured to: extract features from the image to obtain a feature map; and perform semantic segmentation on the feature map to obtain lane line probability maps of N lane lines, where the pixel value of each pixel point in the lane line probability map of a lane line represents the probability value that the corresponding pixel point in the image belongs to that lane line, and N is an integer greater than 0.
  • when performing semantic segmentation on the feature map to obtain the lane line probability maps of the N lane lines, the neural network is configured to: perform semantic segmentation on the feature map to obtain a probability map of N + 1 channels, where the N + 1 channels correspond to the N lane lines and the background, respectively; and obtain the lane line probability maps of the N lane lines from the probability map of the N + 1 channels.
  • for example, the value of N is 2, and the N + 1 channels correspond to the background, the left lane line, and the right lane line, respectively; or, the value of N is 3, and the N + 1 channels correspond to the background, the left lane line, the middle lane line, and the right lane line, respectively; or, the value of N is 4, and the N + 1 channels correspond to the background, the left-left lane line, the left lane line, the right lane line, and the right-right lane line, respectively.
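As an illustrative sketch of how per-lane probability maps can be read off an (N+1)-channel segmentation output at a single pixel: channel 0 is the background and channels 1..N are the lane lines. The softmax normalization is an assumption (the source only specifies the channel layout), and the function name is hypothetical.

```python
import math

def channel_probs(logits):
    """Softmax over the N+1 channel scores at one pixel; returns the
    background probability and the list of N per-lane probabilities."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    probs = [e / s for e in exps]
    return probs[0], probs[1:]                   # background, lane lines
```

Applying this at every pixel of an (N+1)-channel score map yields N lane line probability maps plus a background map, which is the structure the determination subunit consumes.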
  • the determination subunit is configured to: select pixel points with probability values greater than a first preset threshold from the lane line probability map of a lane line; perform a maximum connected domain search in the lane line probability map based on the selected pixel points to find the set of pixel points belonging to the lane line; and determine the area where the lane line is located based on the set of pixel points belonging to the lane line.
  • when determining the area where the lane line is located based on the set of pixel points belonging to the lane line, the determination subunit is configured to: calculate the sum of the probability values of all pixel points in the set to obtain the confidence of the lane line; and if the confidence is greater than a second preset threshold, take the area formed by the pixel point set as the area where the lane line is located.
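The selection–search–confidence pipeline above can be sketched as a breadth-first search over the probability map. The 4-connectivity, the BFS, and the default threshold values are implementation assumptions; the source only requires thresholding, a maximum connected domain search, and the confidence test.

```python
from collections import deque

def lane_region(prob_map, first_threshold=0.5, second_threshold=2.0):
    """Keep pixels above the first preset threshold, find the largest
    connected component among them, and accept it as the lane line region
    only if the sum of its probabilities (the confidence) exceeds the
    second preset threshold."""
    h, w = len(prob_map), len(prob_map[0])
    seen = set()
    best = []
    for i in range(h):
        for j in range(w):
            if (i, j) in seen or prob_map[i][j] <= first_threshold:
                continue
            # BFS over 4-connected neighbours above the threshold.
            comp, queue = [], deque([(i, j)])
            seen.add((i, j))
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                            and prob_map[ny][nx] > first_threshold):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    confidence = sum(prob_map[y][x] for y, x in best)
    return best if confidence > second_threshold else []
```

Returning an empty region when the confidence is too low corresponds to rejecting a spurious detection rather than reporting a lane line.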
  • FIG. 6 is a schematic structural diagram of another embodiment of the lane line-based intelligent driving control device of the present disclosure.
  • the lane line-based intelligent driving control device in this embodiment further includes a pre-processing module for pre-processing the original image including the driving environment of the vehicle.
  • accordingly, when performing semantic segmentation on the image including the driving environment of the vehicle, the neural network is configured to perform semantic segmentation on the preprocessed image.
  • the determination module may include: a fitting processing unit, configured to perform curve fitting on the pixel points in the area where each lane line is located to obtain the lane line information of each lane line, where the lane line information includes the distance from at least one point on the lane line to the vehicle; and a determining unit, configured to determine the estimated distance of the vehicle from the lane line and / or the estimated time for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line information of the lane line.
  • the fitting processing unit is configured to: select multiple pixel points from the area where a lane line is located; convert the multiple pixel points from the camera coordinate system of the camera to the world coordinate system to obtain the coordinates of the multiple pixel points in the world coordinate system; and perform curve fitting on the multiple pixel points in the world coordinate system according to those coordinates to obtain the lane line information of the lane line.
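The fitting step can be sketched as below. The quadratic model x = a·y² + b·y + c, the `cam_to_world` callable standing in for the camera-to-world conversion, and the function name are assumptions; the source only says the selected pixels are converted to world coordinates and curve-fitted.

```python
import numpy as np

def fit_lane_line(pixels, cam_to_world):
    """Convert selected lane line pixels (u, v) to world coordinates with a
    supplied conversion function, then fit a quadratic x = a*y**2 + b*y + c
    as the lane line equation (quadratic order is an illustrative choice)."""
    world = [cam_to_world(u, v) for u, v in pixels]
    xs = np.array([p[0] for p in world])
    ys = np.array([p[1] for p in world])
    a, b, c = np.polyfit(ys, xs, 2)  # least-squares fit of the lane line parameters
    return a, b, c
```

In practice `cam_to_world` would apply the camera's calibrated homography; here any callable mapping image coordinates to ground-plane coordinates fits the interface.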
  • the determining module may further include: a filtering unit, configured to filter parameters in the lane line information of the lane line.
  • the determining unit is configured to determine the estimated distance of the vehicle from the lane line and / or the estimated time of the vehicle from the lane line according to the driving state of the vehicle and the lane line information obtained by the filtering.
  • the filtering unit is configured to perform Kalman filtering on the parameter values of the parameters in the lane line information based on the parameter values in the historical lane line information of the lane line obtained from the previous frame image; the previous frame image is a frame image whose detection timing is located before the image in the video where the image is located.
  • the determining module may further include: a selection unit, configured to select, as valid lane line information on which the Kalman filtering is performed, the lane line information in which the difference between the parameter value of each parameter and the parameter value of the corresponding parameter in the historical lane line information is less than the third preset threshold.
  • when determining the estimated distance of the vehicle from the lane line according to the driving state of the vehicle and the lane line detection result, the determining module is configured to determine the estimated distance between the vehicle and the corresponding lane line according to the position of the vehicle in the world coordinate system and the lane line information of the lane line.
  • the driving state of the vehicle includes the vehicle's position in the world coordinate system.
  • when determining the estimated time for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result, the determining module is configured to determine the estimated time according to the speed of the vehicle, the position of the vehicle in the world coordinate system, and the lane line information; the driving state of the vehicle includes the speed of the vehicle and the position of the vehicle in the world coordinate system.
  • the control module may include: a comparison unit, configured to compare the estimated distance and / or the estimated time with at least one preset threshold; and a control unit, configured to, when the comparison result satisfies one or more preset conditions, perform intelligent driving control corresponding to the satisfied preset conditions; the intelligent driving control includes automatic driving control and / or assisted driving control.
  • the intelligent driving control performed on the vehicle may include, but is not limited to, controlling at least one of the following: automatic driving control, assisted driving control, and the like.
  • the automatic driving control of the vehicle may include, but is not limited to, any one or more of the following controls: braking, deceleration, changing the driving direction, lane line keeping, driving mode switching control, and other operations that control the driving state of the vehicle.
  • the assisted driving control of the vehicle may include, but is not limited to, any one or more of the following controls: performing a lane line departure warning, giving a lane line keeping prompt, and other operations that help prompt the driver to control the driving state of the vehicle.
  • the degree of intelligent driving control corresponding to each of the multiple preset conditions may be gradually increased.
  • the control unit is configured to: if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, give a lane line departure prompt to the vehicle; or, if the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, give a lane line departure prompt to the vehicle; or, if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, and the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, give a lane line departure prompt to the vehicle.
  • the lane line departure warning includes the lane line departure prompt; the fifth preset threshold is smaller than the fourth preset threshold, and the seventh preset threshold is smaller than the sixth preset threshold.
  • the control unit may be further configured to: if the estimated distance is less than or equal to the fifth preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle; or, if the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle; or, if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and / or a lane line departure alarm on the vehicle.
  • the lane line departure warning includes the lane line departure alarm.
  • the control unit may be further configured to: when performing automatic driving control and / or the lane line departure alarm on the vehicle if the estimated distance is less than or equal to the fifth preset threshold, perform the automatic driving control and / or the lane line departure alarm on the vehicle if the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold, where the historical frame images include at least one frame image whose detection sequence is located before the image in the video; or, when performing automatic driving control and / or the lane line departure alarm on the vehicle if the estimated time is less than or equal to the seventh preset threshold, perform the automatic driving control and / or the lane line departure alarm on the vehicle if the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold; or, perform the automatic driving control and / or the lane line departure alarm on the vehicle if the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold, and the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold.
  • An embodiment of the present disclosure further provides an electronic device including a lane line-based intelligent driving control device according to any one of the foregoing embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides another electronic device, including: a memory for storing executable instructions; and a processor for communicating with the memory to execute the executable instructions so as to complete the operations of the lane line-based intelligent driving control method according to any one of the embodiments of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
  • the electronic device includes one or more processors, a communication unit, and the like.
  • the one or more processors are, for example, one or more central processing units (CPUs) and / or one or more graphics processing units (GPUs).
  • the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage portion into a random access memory (RAM) .
  • the communication unit may include, but is not limited to, a network card.
  • the network card may include, but is not limited to, an IB (Infiniband) network card.
  • the processor may communicate with the read-only memory and / or the random access memory to execute executable instructions, is connected to the communication unit through a bus, and communicates with other target devices via the communication unit, thereby completing operations corresponding to any of the lane line-based intelligent driving control methods provided in the embodiments of the present disclosure, for example: obtaining a lane line detection result of the vehicle driving environment; determining, according to the driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and / or an estimated time for the vehicle to drive out of the lane line; and performing intelligent driving control on the vehicle based on the estimated distance and / or the estimated time.
  • various programs and data required for the operation of the device can be stored in the RAM.
  • the CPU, ROM, and RAM are connected to each other through a bus.
  • ROM is an optional module.
  • the RAM stores executable instructions, or writes executable instructions to ROM at runtime, and the executable instructions cause the processor to perform operations corresponding to any of the lane line-based intelligent driving control methods of the present disclosure.
  • An input / output (I / O) interface is also connected to the bus.
  • the communication unit can be integrated or set to have multiple sub-modules (for example, multiple IB network cards) and be on the bus link.
  • the following components are connected to the I / O interface: an input part including a keyboard, a mouse, and the like; an output part including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage part including a hard disk and the like; and a communication part including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • the drive is also connected to the I / O interface as required. Removable media, such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive as needed, so that a computer program read therefrom is installed into the storage section as needed.
  • FIG. 7 shows only an optional implementation manner. In practice, the number and types of the components in FIG. 7 may be selected, deleted, added, or replaced according to actual needs. Different functional components may be implemented separately or in an integrated manner; for example, the GPU and the CPU may be set separately, or the GPU may be integrated on the CPU, and the communication part may be set separately or integrated on the CPU or the GPU, and so on. These alternative embodiments all fall within the protection scope of the present disclosure.
  • an embodiment of the present disclosure also provides a computer storage medium for storing computer-readable instructions that, when executed, implement operations of the lane line-based intelligent driving control method of any of the foregoing embodiments of the present disclosure.
  • an embodiment of the present disclosure also provides a computer program including computer-readable instructions; when the computer-readable instructions run in a device, a processor in the device executes instructions for implementing the lane line-based intelligent driving control method of any of the foregoing embodiments of the present disclosure.
  • the methods and apparatus of the present disclosure may be implemented in many ways.
  • the methods and apparatuses of the present disclosure may be implemented by software, hardware, firmware or any combination of software, hardware, firmware.
  • the above order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order described above unless specifically stated otherwise.
  • the present disclosure may also be implemented as programs recorded in a recording medium, which programs include machine-readable instructions for implementing a method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing a method according to the present disclosure.

Abstract

An intelligent driving control method and apparatus based on lane lines, and an electronic device, the method including: (S102) obtaining a lane line detection result of a vehicle driving environment; (S104) determining, according to the driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line; and (S106) performing intelligent driving control on the vehicle according to the estimated distance and/or the estimated time. The method implements intelligent control of the driving state of the vehicle based on lane lines, which helps improve driving safety.

Description

Intelligent driving control method and apparatus based on lane lines, and electronic device
The present disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on May 31, 2018, with application number CN201810551908.X and entitled "Intelligent driving control method and apparatus based on lane lines, and electronic device", the entire contents of which are incorporated herein by reference.
Background
Lane line detection is a key technology in automatic driving and assisted driving. With this technology, the lane lines on the road on which a vehicle is driving can be detected, so that the current position of the vehicle can be determined, providing key information for subsequent early warning.
Summary
Embodiments of the present disclosure provide a technical solution for intelligent driving control based on lane lines.
An intelligent driving control method based on lane lines provided by an embodiment of the present disclosure includes:
obtaining a lane line detection result of a vehicle driving environment;
determining, according to the driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line;
performing intelligent driving control on the vehicle according to the estimated distance and/or the estimated time.
According to another aspect of the embodiments of the present disclosure, an intelligent driving control apparatus based on lane lines is provided, including:
an acquisition module, configured to obtain a lane line detection result of a vehicle driving environment;
a determining module, configured to determine, according to the driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line;
a control module, configured to perform intelligent driving control on the vehicle according to the estimated distance and/or the estimated time.
According to yet another aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a memory, configured to store a computer program;
a processor, configured to execute the computer program stored in the memory, where when the computer program is executed, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
According to yet another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where when the computer program is executed by a processor, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
According to yet another aspect of the embodiments of the present disclosure, a computer program is provided, including computer instructions, where when the computer instructions run in a processor of a device, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
According to yet another aspect of the embodiments of the present disclosure, a computer program product is provided, configured to store computer-readable instructions, where when the instructions are executed, the computer executes the lane line-based intelligent driving control method described in any one of the foregoing possible implementations.
In an optional implementation, the computer program product is a computer storage medium; in another optional implementation, the computer program product is a software product, such as a Software Development Kit (SDK), or the like.
Based on the intelligent driving control methods and apparatuses based on lane lines, the electronic devices, the programs, and the media provided by the foregoing embodiments of the present disclosure, a lane line detection result of the vehicle driving environment is obtained; an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result; and intelligent driving control is performed on the vehicle according to the estimated distance and/or the estimated time. Thereby, the embodiments of the present disclosure implement intelligent control of the driving state of the vehicle based on lane lines, which helps improve driving safety.
The technical solutions of the present disclosure are further described in detail below with reference to the accompanying drawings and embodiments.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, describe embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of an embodiment of the intelligent driving control method based on lane lines of the present disclosure.
FIG. 2 is a flowchart of another embodiment of the intelligent driving control method based on lane lines of the present disclosure.
FIG. 3 is a flowchart of yet another embodiment of the intelligent driving control method based on lane lines of the present disclosure.
FIG. 4 is an example of two lane lines in an embodiment of the present disclosure.
FIG. 5 is a schematic structural diagram of an embodiment of the intelligent driving control apparatus based on lane lines of the present disclosure.
FIG. 6 is a schematic structural diagram of another embodiment of the intelligent driving control apparatus based on lane lines of the present disclosure.
FIG. 7 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure.
Detailed description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.
It should also be understood that, in the embodiments of the present disclosure, "multiple" may refer to two or more, and "at least one" may refer to one, two, or more.
Those skilled in the art can understand that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules, and represent neither any specific technical meaning nor a necessary logical order between them.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure can generally be understood as one or more, unless explicitly defined otherwise or the context suggests the contrary.
It should also be understood that the description of the embodiments of the present disclosure emphasizes the differences between the embodiments; for the same or similar parts, reference may be made to one another, and for brevity they are not repeated one by one.
Meanwhile, it should be understood that, for convenience of description, the dimensions of the parts shown in the accompanying drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended as any limitation on the present disclosure or its application or use.
Technologies, methods, and devices known to those of ordinary skill in the related art may not be discussed in detail, but where appropriate, such technologies, methods, and devices should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
The embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing technology environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment, in which tasks are performed by remote processing devices linked through a communication network. In the distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
图1为本公开基于车道线的智能驾驶控制方法一个实施例的流程图。如图1所示,该实施例的基于车道线的智能驾驶控制方法包括:
102,获取车辆行驶环境的车道线检测结果。
在其中一些可选示例中,例如可以通过如下方式获取车辆行驶环境中的车道线检测结果:基于神经网络检测车辆行驶环境中的车道线,例如:通过神经网络对包括所述车辆行驶环境的图像进行车道线检测,得到车道线检测结果;或者,直接从高级驾驶辅助系统(ADAS)获取车辆行驶环境中的车道线检测结果,即直接利用ADAS中的车道线检测结果。
在一个可选示例中,该操作102可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的获取模块执行。
104,根据车辆的行驶状态和车道线检测结果,确定该车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间。
在一个可选示例中,该操作104可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块执行。
106,根据上述估计距离和/或估计时间,对该车辆进行智能驾驶控制。
在其中一些实施方式中,对车辆进行的智能驾驶控制,例如可以包括但不限于对车辆进行如下至少一项控制:自动驾驶控制,辅助驾驶控制,等等。
其中,对车辆的自动驾驶控制,例如可以包括但不限于对车辆进行如下任意一项或多项控制车辆驾驶状态的操作:制动、减速、改变行驶方向、车道线保持、驾驶模式切换控制,等等。其中,驾驶模式切换控制可以控制车辆从自动驾驶模式切换为非自动驾驶模式(如:手动驾驶模式)、或者从非自动驾驶模式切换为自动驾驶模式。
对车辆的辅助驾驶控制,例如可以包括但不限于对车辆进行如下任意一项或多项有助于提示驾驶员控制车辆驾驶状态的操作:进行车道线偏离预警,进行车道线保持提示,等等。
在一个可选示例中,该操作106可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
基于本公开上述实施例提供的基于车道线的智能驾驶控制方法,获取车辆行驶环境的车道线检测结果,根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间,根据估计距离和/或估计时间,对车辆进行自动驾驶或辅助驾驶等智能驾驶控制,由此,本公开实施例实现了基于车道线对车辆行驶状态的智能控制,以期降低或避免车辆驶出车道线出现交通事故,有助于提高驾驶安全性。
图2为本公开基于车道线的智能驾驶控制方法另一个实施例的流程图。如图2所示,该实施例的基于车道线的智能驾驶控制方法包括:
202,通过神经网络对包括车辆行驶环境的图像进行语义分割,输出车道线概率图。
其中,车道线概率图用于表示图像中的至少一个像素点分别属于车道线的概率值。
本公开实施例中的神经网络可以是深度神经网络,例如卷积神经网络,可以预先通过样本图像和预先标注的、准确的车道线概率图对神经网络进行训练得到。其中,通过样本图像和准确的车道线概率图对神经网络进行训练,例如可以通过如下方式实现:通过神经网络对样本图像进行语义分割,输出预测车道线概率图;根据预测车道线概率图与准确的车道线概率图在对应的至少一个像素点之间的差异,获取神经网络的损失函数值,基于该损失函数值对神经网络进行训练,例如基于梯度更新训练方法,通过链式法则反传梯度,对神经网络中各网络层参数的参数值进行调整,直至满足预设条件,例如,预测车道线概率图与准确的车道线概率图在对应的至少一个像素点之间的差异小于预设差值、和/或对神经网络的训练次数达到预设次数,得到训练好的神经网络。
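作为示意,上述“根据预测车道线概率图与准确的车道线概率图在对应像素点之间的差异,获取神经网络的损失函数值”这一步,可以用如下纯Python代码片段表示(其中以逐像素二值交叉熵作为差异度量,函数名与数值均为示意性假设,并非对本公开实施例的限定):

```python
import math

def bce_loss(pred, target, eps=1e-7):
    """逐像素二值交叉熵:pred、target 为同尺寸的概率图(二维列表)。"""
    total, count = 0.0, 0
    for row_p, row_t in zip(pred, target):
        for p, t in zip(row_p, row_t):
            p = min(max(p, eps), 1.0 - eps)  # 裁剪概率值,避免 log(0)
            total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
            count += 1
    return total / count

# 预测车道线概率图与人工标注的准确概率图(示意数据)
pred = [[0.9, 0.1], [0.2, 0.8]]
target = [[1.0, 0.0], [0.0, 1.0]]
loss = bce_loss(pred, target)
# 差异(损失)越小,预测概率图与准确概率图越接近;训练即以减小该损失为目标调整网络参数值
```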
在一个可选示例中,该操作202可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的检测单元或者检测单元中的神经网络执行。
204,根据车道线概率图确定车道线所在区域,作为车道线检测结果。
在一个可选示例中,该操作204可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的检测单元或者检测单元中的确定子单元执行。
206,根据车辆的行驶状态和车道线检测结果,确定该车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间。
在一个可选示例中,该操作206可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块执行。
208,根据上述估计距离和/或估计时间,对该车辆进行智能驾驶控制。
在一个可选示例中,该操作208可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
本实施例通过神经网络对图像进行语义分割,输出车道线概率图,根据该车道线概率图确定车道线所在区域。由于神经网络可以基于深度学习的方式,通过学习大量标注过的车道线图像,例如弯道、车道线缺失、路牙边缘、光线昏暗、逆光等场景下的车道线图像,自动学习到车道线的各种特征,无需人工手动设计特征,简化了流程,并且降低了人工标注成本;另外可以在各种驾驶场景中有效识别出车道线,实现对弯道、车道线缺失、路牙边缘、光线昏暗、逆光等各种复杂场景下的车道线检测,提升了车道线检测的精度,以便获取精确的估计距离和/或估计时间,从而有助于提升智能驾驶控制的准确性,提高驾驶的安全性。
可选地,在本公开基于车道线的智能驾驶控制方法的另一个实施例中,在上述操作202之前,还可以包括:对包括车辆行驶环境的原始图像进行预处理,得到上述包括车辆行驶环境的图像。相应地,操作202中,通过神经网络,对预处理得到的上述图像进行语义分割。
其中,对原始图像的预处理,例如可以是对摄像头采集的原始图像进行缩放、裁剪等,将原始图像缩放、裁剪为预设尺寸的图像,再输入神经网络进行处理,以降低神经网络对图像进行语义分割的复杂度、降低耗时,提高处理效率。
另外,对原始图像的预处理,还可以是按照预设图像质量(例如图像清晰度、曝光度等)标准,从摄像头采集的原始图像中选取一些质量符合标准的图像,再输入神经网络进行处理,从而提高语义分割的准确性,以便提高车道线检测的准确率。
在其中一些实施方式中,操作202中通过神经网络对包括车辆行驶环境的图像进行语义分割,输出车道线概率图,可以包括:
通过神经网络对图像进行特征提取,得到特征图;
通过神经网络对该特征图进行语义分割,得到N条车道线的车道线概率图。其中,每条车道的车道线概率图中各像素点的像素值用于表示图像中对应像素点分别属于该条车道线的概率值,N的取值为大于0的整数。例如,在一些可选示例中,N的取值为4。
本公开各实施例中的神经网络,可以包括:用于特征提取的网络层和用于分类的网络层。其中,用于特征提取的网络层例如可以包括:卷积层,批归一化(Batch Normalization,BN)层和非线性层。依次通过卷积层、BN层和非线性层对图像进行特征提取,会产生特征图;通过用于分类的网络层对特征图进行语义分割,会得到多条车道线的车道线概率图。
其中,上述N条车道线的车道线概率图可以是一个通道的概率图,该概率图中的各像素点的像素值分别表示图像中对应像素点属于车道线的概率值。另外,上述N条车道线的车道线概率图也可以是一个N+1个通道的概率图,该N+1个通道分别对应于N条车道线和背景,即,N+1个通道的概率图中各通道的概率图分别表示上述图像中至少一个像素点分别属于该通道对应的车道线或者背景的概率。
在其中一些可选示例中,通过神经网络对特征图进行语义分割,得到N条车道线的车道线概率图,可以包括:
通过神经网络对上述特征图进行语义分割,得到N+1个通道的概率图。其中,该N+1个通道分别对应于N条车道线和背景,即,N+1个通道的概率图中各通道的概率图分别表示上述图像中至少一个像素点分别属于该通道对应的车道线或者背景的概率;
从N+1个通道的概率图中获取N条车道线的车道线概率图。
本公开实施例中的神经网络可以包括:用于特征提取的网络层、用于分类的网络层、以及归一化(Softmax)层。依次通过用于特征提取的各网络层对图像进行特征提取,产生一系列的特征图;通过用于分类的网络层对最终输出的特征图进行语义分割,得到N+1个通道的车道线概率图;利用Softmax层对N+1个通道的车道线概率图进行归一化处理,将车道线概率图中各像素点的概率值转化为0~1范围内的数值。
在本公开实施例中,用于分类的网络层可以对特征图中的各像素点进行多分类,例如,对于4条车道线(称为:左左车道线,左车道线,右车道线和右右车道线)的场景,可以对特征图中的各像素点进行五分类,识别特征图中的各像素点分别属于五种类别(背景,左左车道线,左车道线,右车道线和右右车道线)的概率值,并分别输出特征图中的各像素点属于其中一种类型的概率图,得到上述N+1个通道的概率图,每个概率图中各像素的概率值表示该像素对应的图像中像素属于某一类别的概率值。
上述实施例中,N为车辆行驶环境中车道线的条数,可以是任意大于0的整数值。例如,N的取值为2时,N+1个通道分别对应于车辆行驶环境中的背景、左车道线和右车道线;或者,N的取值为3时,N+1个通道分别对应于车辆行驶环境中的背景、左车道线、中车道线和右车道线;或者,N的取值为4时,N+1个通道分别对应于车辆行驶环境中的背景、左左车道线、左车道线、右车道线和右右车道线。
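作为示意,上述对N+1个通道的概率值进行Softmax归一化的处理,可以用如下纯Python代码片段表示(各通道得分为示意数据):

```python
import math

def softmax(scores):
    """对一个像素点在 N+1 个通道上的得分做 Softmax 归一化,得到 0~1 之间、且和为 1 的概率值。"""
    m = max(scores)                          # 先减去最大值,保证数值稳定
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# 某像素点在 5 个通道(背景、左左车道线、左车道线、右车道线、右右车道线)上的得分(示意数据)
scores = [0.5, 2.0, 0.1, -1.0, 0.3]
probs = softmax(scores)
channel = probs.index(max(probs))  # 概率最大的通道即该像素点所属类别,此处为通道 1(左左车道线)
```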
在其中一些实施方式中,操作204中根据一条车道线的车道线概率图确定车道线所在区域,可以包括:
从上述车道线概率图中选取概率值大于第一预设阈值的像素点;
基于选取出的像素点在车道线概率图中进行最大连通域查找,找出属于该车道线的像素点集合;
基于上述属于车道线的像素点集合确定该车道线所在区域。
示例性地,可以采用广度优先搜索算法进行最大连通域查找,找出所有概率值大于第一预设阈值的连通区域,然后从所有连通区域中选出最大的连通区域,作为检测出的车道线所在区域。
神经网络的输出为多条车道线的车道线概率图,车道线概率图中各像素点的像素值表示对应图像中像素点属于某条车道线的概率值,其值可以是归一化后0-1之间的一个数值。通过第一预设阈值选取出车道线概率图中大概率属于该车道线概率图所属车道线的像素点,然后执行最大连通域查找,找出属于该车道线的像素点集合,作为该车道线所在区域。针对每一条车道线分别执行上述操作,即可确定各条车道线所在区域。
在其中一些可选示例中,上述基于属于车道线的像素点集合确定该车道线所在区域,可以包括:
统计属于该车道线的像素点集合中所有像素点的概率值之和,得到该车道线的置信度;
若该置信度大于第二预设阈值,以上述像素点集合形成的区域作为该车道线所在区域。
本公开实施例中,对于每条车道线,统计像素点集合中所有像素点的概率值之和,得到该条车道线的置信度。其中的置信度,为由像素点集合形成的区域是真实存在的车道线的概率值。其中,第二预设阈值为根据实际需求设置的经验值,可以根据实际场景进行调整。如果置信度太小,即不大于第二预设阈值,表示该车道线不存在,丢弃确定的该车道线;如果置信度较大,即大于第二预设阈值,表示确定的车道线所在区域是真实存在的车道线的概率值较高,确定作为该车道线所在区域。
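作为示意,上述“阈值筛选、最大连通域查找与置信度校验”的过程可以用如下纯Python代码片段表示(其中采用广度优先搜索按4邻域查找连通域,第一、第二预设阈值的取值均为示意性假设):

```python
from collections import deque

def lane_region(prob_map, thr1=0.5, thr2=2.0):
    """从一条车道线的概率图中找出概率值大于 thr1(第一预设阈值)的最大连通域;
    若该连通域内所有像素概率之和(置信度)大于 thr2(第二预设阈值),
    返回其像素坐标集合,否则认为该车道线不存在,返回空集合。"""
    h, w = len(prob_map), len(prob_map[0])
    seen, best = set(), set()
    for i in range(h):
        for j in range(w):
            if (i, j) in seen or prob_map[i][j] <= thr1:
                continue
            # 广度优先搜索,找出包含 (i, j) 的连通域
            comp, queue = set(), deque([(i, j)])
            seen.add((i, j))
            while queue:
                y, x = queue.popleft()
                comp.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4 邻域
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                            and prob_map[ny][nx] > thr1:
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            if len(comp) > len(best):
                best = comp
    confidence = sum(prob_map[y][x] for y, x in best)  # 像素概率之和作为置信度
    return best if confidence > thr2 else set()

prob_map = [
    [0.9, 0.8, 0.1, 0.0],
    [0.0, 0.9, 0.1, 0.7],
    [0.0, 0.8, 0.0, 0.6],
]
region = lane_region(prob_map)
# 左侧 4 个高概率像素连通成片,为最大连通域;右侧 (1,3)、(2,3) 构成的较小连通域被舍弃
```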
图3为本公开基于车道线的智能驾驶控制方法另一个实施例的流程图。如图3所示,该实施例的基于车道线的智能驾驶控制方法包括:
302,通过神经网络对包括车辆行驶环境的图像进行语义分割,输出车道线概率图。
其中,车道线概率图用于表示图像中的至少一个像素点分别属于车道线的概率值。
在一个可选示例中,该操作302可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的检测单元或者检测单元中的神经网络执行。
304,根据车道线概率图确定车道线所在区域,作为车道线检测结果。
在一个可选示例中,该操作304可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的检测单元或者检测单元中的确定子单元执行。
306,分别对每条车道线所在区域中的像素点进行曲线拟合,得到每条车道线的车道线信息。
其中,该车道线信息包括车道线上至少一点(例如车道线上的各点)到车辆的距离。其中的车道线信息的表现形式有多种,例如可以是一条曲线、直线、包括车道线上至少一点及其到车辆的距离的离散图,也可以是一个数据表,或者还可以表示为一个方程,等等,本公开实施例不限定车道线信息的表现形式。
车道线信息表示为一个方程时,可以称为车道线方程。在其中一些可选示例中,车道线方程可以是一个二次曲线方程,可以表示为:x=a*y*y+b*y+c。该车道线方程中具有三个参数(a,b,c)。如图4所示,其中的两条曲线为两条车道线方程对应的两条车道线。其中,Y_max表示车道线所在地面上的一点到车辆正前方竖直方向的最大距离,Y_min表示车道线所在地面上的一点到车辆正前方竖直方向的最小距离。
在一个可选示例中,该操作306可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块或者确定模块中的拟合处理单元执行。
308,根据车辆的行驶状态和车道线的车道线信息,确定该车辆驶出相应车道线的估计距离和/或车辆驶出车道线的估计时间。
在一个可选示例中,该操作308可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的确定模块或者确定模块中的确定单元执行。
310,根据上述估计距离和/或估计时间,对该车辆进行智能驾驶控制。
在一个可选示例中,该操作310可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制模块执行。
本公开实施例在确定车道线所在区域后,通过对每条车道线所在区域中的像素点进行曲线拟合得到每条车道线的车道线信息,并基于车辆的行驶状态和车道线的车道线信息确定该车辆驶出相应车道线的估计距离和/或车辆驶出车道线的估计时间,由于进行曲线拟合得到的车道线信息可以表现为二次曲线或者类似表示方式,可以较好地贴合弯道车道线,对于弯道仍然有良好的适用性,可以适用于各种道路情况的预警。
在其中一些实施方式中,操作306中,对一条车道线所在区域中的像素点进行曲线拟合,得到该条车道线的车道线信息,可以包括:
从一条车道线所在区域中选取多个(例如三个或以上)像素点;
将选取的多个像素点从摄像头所在的相机坐标系转换到世界坐标系中,得到上述多个像素点在世界坐标系中的坐标。其中,世界坐标系的原点可以根据需求设定,例如可以设置原点为车辆左前轮着地点,世界坐标系中的y轴方向为车辆正前方方向;
根据上述多个像素点在世界坐标系中的坐标,在世界坐标系中对上述多个像素点进行曲线拟合,得到上述一条车道线的车道线信息。
例如,可以从一条车道线所在区域中随机挑选出一部分像素点,根据相机标定参数(也可以称为摄像机标定参数),将这些像素点转换到世界坐标系下,然后在世界坐标系下对这些像素点进行曲线拟合,便可得到拟合曲线。根据此拟合曲线,可以计算出上述车道线上任意一点到车辆的距离,即根据拟合曲线x=a*y*y+b*y+c,可以计算车道线所在地面上任意一点x到车辆正前方竖直方向的距离y,以及前方道路上的车道划分情况,其中,a、b、c为拟合曲线中的参数。其中的相机标定参数,可以包括内参和外参。其中,基于外参可以确定相机或摄像机在世界坐标系中的位置和朝向,外参可以包括旋转矩阵和平移矩阵,旋转矩阵和平移矩阵共同描述了如何把点从世界坐标系转换到相机坐标系或者反之;内参是与相机自身特性相关的参数,例如相机的焦距、像素大小等。
其中的曲线拟合是指,通过一些离散点计算出这些点构成的曲线。在本公开实施例的一些可选示例中,例如可以采用最小二乘法基于上述多个像素点进行曲线拟合。
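作为示意,上述最小二乘曲线拟合可以用如下纯Python代码片段表示(通过求解正规方程组拟合 x=a*y*y+b*y+c,采样点为由假设参数生成的示意数据):

```python
def fit_quadratic(points):
    """最小二乘拟合 x = a*y*y + b*y + c,points 为 (x, y) 列表,返回 (a, b, c)。"""
    # 构造正规方程 A^T A p = A^T x,其中 A 的每行为 [y^2, y, 1]
    ata = [[0.0] * 3 for _ in range(3)]
    atx = [0.0] * 3
    for x, y in points:
        row = [y * y, y, 1.0]
        for i in range(3):
            atx[i] += row[i] * x
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # 带列主元的高斯-约当消元,求解 3x3 线性方程组
    m = [ata[i] + [atx[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= f * m[col][c]
    return tuple(m[i][3] / m[i][i] for i in range(3))

# 由假设参数 a=0.01, b=0.5, c=1 生成的无噪声采样点,用于验证拟合能还原参数
pts = [(0.01 * y * y + 0.5 * y + 1.0, y) for y in (0.0, 5.0, 10.0, 20.0, 30.0)]
a, b, c = fit_quadratic(pts)
```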
另外,在本公开基于车道线的智能驾驶控制方法的又一个实施例中,为了防止基于两帧图像确定的车道线抖动和车辆换道过程中车道线产生混乱情况,通过操作306得到车道线的车道线信息之后,还可以包括:对车道线的车道线信息中的参数进行滤波,以滤除抖动和一些异常情况,确保车道线信息的稳定性。相应地,操作308中,根据车辆的行驶状态和滤波得到的车道线的车道线信息,确定车辆驶出相应车道线的估计距离和/或车辆驶出车道线的估计时间。在其中一些实施方式中,对一条车道线的车道线信息中的参数进行滤波,可以包括:
根据该条车道线的车道线信息中参数的参数值与基于上一帧图像获得的该车道线的历史车道线信息中参数的参数值,对该条车道线信息中参数的参数值进行卡尔曼(kalman)滤波。其中,上一帧图像为上述图像所在视频中检测时序位于该图像之前的一帧图像,例如可以是该图像相邻的前一帧图像,也可以是检测时序位于该图像之前、间隔一帧或多帧的图像。
卡尔曼滤波是一种根据时变随机信号的统计特性,对信号的未来值做出尽可能接近真值的估计的方法。本实施例中根据该条车道线的车道线信息中参数的参数值与基于上一帧图像获得的该车道线的历史车道线信息中参数的参数值,对该条车道线信息中参数的参数值进行卡尔曼滤波,可以提高该条车道线信息的准确性,有助于后续精确地确定车辆与车道线之间的距离等信息,以便对车辆偏离车道线进行准确预警。
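作为示意,对车道线方程中单个参数进行卡尔曼滤波的过程可以用如下一维纯Python代码片段表示(其中过程噪声、观测噪声等取值均为示意性假设):

```python
class ScalarKalman:
    """对单个车道线参数做一维卡尔曼滤波:状态即参数值,并假设其在相邻帧间缓慢变化。"""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.5):
        self.x, self.p = x0, p0   # 参数估计值及其方差
        self.q, self.r = q, r     # 过程噪声、观测噪声方差(示意取值)

    def update(self, z):
        # 预测:假设参数近似不变,仅增大不确定度
        self.p += self.q
        # 更新:按卡尔曼增益融合当前帧的观测值 z
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# 对参数 b 的逐帧观测值(带抖动),经滤波后输出更平稳
kf = ScalarKalman(x0=0.50)
smoothed = [kf.update(z) for z in (0.52, 0.47, 0.55, 0.50, 0.49)]
```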
进一步地,在本公开基于车道线的智能驾驶控制方法的再一个实施例中,对车道线信息中参数的参数值进行卡尔曼滤波之前,还可以包括:针对同一条车道线,选取车道线信息中参数的参数值相对于历史车道线信息中对应参数的参数值有变化、且车道线信息中参数的参数值与历史车道线信息中对应参数的参数值之间的差值小于第三预设阈值的车道线信息,以作为有效的车道线信息进行卡尔曼滤波,即对车道线信息中的参数(例如x=a*y*y+b*y+c中的三个参数(a,b,c))进行平滑。由于视频中基于每帧图像拟合出的车道线信息中的参数都会变化,但相邻帧图像之间不会变化太大,因此可以对当前帧图像的车道线信息进行一些平滑,滤除抖动和一些异常情况,确保车道线信息的稳定性。
例如,在其中一些实施方式中,可以为视频中参与车道线检测的首帧图像确定出的每一条车道线分别建立一个跟踪器来跟踪该车道线,如果当前帧图像检测到同一条车道线,并且该车道线的车道线信息与上一帧图像确定出的同一条车道线的车道线信息中参数值之间的差值小于第三预设阈值,则将当前帧图像的车道线信息中的参数值更新到上一帧图像确定出的同一条车道线的跟踪器中,以对当前帧图像中该同一条车道线的车道线信息进行卡尔曼滤波。如果同一条车道线的跟踪器在连续两帧图像中都有更新,说明该条车道线的确定结果较准确,可确认该条车道线的跟踪器,将该跟踪器跟踪的车道线设置为最终的车道线结果。
如果跟踪器连续若干帧都没有更新,则认为相应的车道线消失,删除该跟踪器。
如果从当前帧图像中没有检测到与上一帧图像相匹配的车道线,说明上一帧图像中确定的该条车道线误差较大,删除上一帧图像中的该跟踪器。
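作为示意,上述跟踪器的建立、更新、确认与删除逻辑可以用如下纯Python代码片段表示(其中第三预设阈值的取值、允许连续未更新的帧数上限等均为示意性假设,且以直接更新参数代替卡尔曼滤波):

```python
class LaneTracker:
    """单条车道线的跟踪器:参数差异小于阈值才更新;连续两帧有更新则确认;连续多帧未更新则删除。"""
    def __init__(self, params, diff_thr=0.2, max_miss=3):
        self.params = params          # 车道线方程参数 (a, b, c)
        self.diff_thr = diff_thr      # 对应文中"第三预设阈值"(示意取值)
        self.max_miss = max_miss      # 允许连续未更新的帧数上限(示意取值)
        self.hits, self.misses = 1, 0
        self.confirmed = False

    def feed(self, params):
        """params 为当前帧检测出的同一条车道线的参数,未检测到则传 None;
        返回 False 表示该车道线消失,应删除此跟踪器。"""
        if params is not None and all(
                abs(p - q) < self.diff_thr for p, q in zip(params, self.params)):
            self.params = params      # 有效观测:更新跟踪器(实际中此处做卡尔曼滤波)
            self.hits += 1
            self.misses = 0
            if self.hits >= 2:
                self.confirmed = True # 连续两帧都有更新,确认为最终车道线结果
        else:
            self.misses += 1          # 未匹配到:累计连续丢失的帧数
        return self.misses < self.max_miss

tracker = LaneTracker((0.01, 0.5, 1.0))
alive = tracker.feed((0.012, 0.48, 1.05))   # 参数差异小于阈值,更新并确认
```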
在任一实施例的其中一些实施方式中,操作308中,根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计距离,可以包括:
根据该车辆在世界坐标系中的位置、以及车道线的车道线信息,确定该车辆与相应车道线之间的估计距离;该实施例中,车辆的行驶状态包括该车辆在世界坐标系中的位置。
类似地,操作308中,根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计时间,可以包括:
根据车辆的速度和车辆在世界坐标系中的位置、以及车道线的车道线信息,确定车辆驶出车道线的估计时间;车辆的行驶状态包括车辆的速度和车辆在世界坐标系中的位置。
在任一实施例的其中一些实施方式中,根据上述估计距离和/或估计时间,对该车辆进行智能驾驶控制,可以包括:
将估计距离和/或估计时间与至少一预定阈值进行比较;
在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制,例如进行所满足的预设条件相应的自动驾驶控制和/或辅助驾驶控制。
其中,在预设条件包括多个时,该多个预设条件分别对应的智能驾驶控制的程度可以逐级递增。本实施例中,可以根据车辆驶出车道线的估计距离和/或估计时间的不同,采取相应的智能驾驶控制手段对车辆进行相应自动驾驶控制和/或辅助驾驶控制,可以在不干涉正常驾驶的情况下有效避免车辆驶出车道线出现交通事故,提高驾驶的安全性。
例如,在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制时,在其中一些可选示例中,可以包括:
若估计距离小于或等于第四预设阈值、且大于第五预设阈值,对车辆进行车道线偏离提示,例如,提醒车辆已偏离当前车道、将驶出当前车道线等等;或者,
若估计时间小于或等于第六预设阈值、且大于第七预设阈值,对车辆进行车道线偏离提示;或者,
若估计距离小于或等于第四预设阈值、且大于第五预设阈值,且估计时间小于或等于第六预设阈值、且大于第七预设阈值,对车辆进行车道线偏离提示;
其中,车道线偏离预警包括车道线偏离提示。第四预设阈值和第五预设阈值的取值分别大于0,且第五预设阈值小于第四预设阈值,例如,第四预设阈值和第五预设阈值的取值分别为5米、3米。第六预设阈值和第七预设阈值的取值分别大于0,且第七预设阈值小于第六预设阈值,例如,第六预设阈值和第七预设阈值的取值分别为5秒、3秒。
在车辆到车道线的估计距离小于或等于第四预设阈值、且大于第五预设阈值,或者,车辆预计驶出车道线的估计时间小于或等于第六预设阈值、且大于第七预设阈值时,对车辆进行车道线偏离提示,可以提醒驾驶员注意到车辆偏移车道线、以便及时采取相应驾驶措施,避免车辆驶出车道线,提高驾驶安全性。结合车辆到车道线的估计距离和预计驶出车道线的估计时间进行车道线偏离提示时,可以提高车道线偏离预警的准确率。在进一步的可选示例中,还可以包括:
若估计距离小于或等于第五预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,
若估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,
若估计距离小于或等于第五预设阈值,且估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;
其中,车道线偏离预警包括车道线偏离报警,该车道线偏离警报例如可以是以声、光、电等方式进行报警。
在上述实施方式中,随着估计距离和/或估计时间的逐渐变小,分别对应的智能驾驶控制的程度逐级递增,从对车辆进行车道线偏离提示、到对车辆进行自动驾驶控制和/或车道线偏离报警,以避免车辆驶出车道线,提高驾驶的安全性。
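作为示意,上述逐级递增的控制策略可以用如下纯Python代码片段表示(其中距离阈值单位为米、时间阈值单位为秒,取值均为示意性假设):

```python
def driving_control(dist, t, thr4=5.0, thr5=3.0, thr6=5.0, thr7=3.0):
    """根据驶出车道线的估计距离 dist 与估计时间 t 决定智能驾驶控制级别:
    先判断程度更高的一级(距离/时间更短),再判断较轻的提示一级。"""
    if dist <= thr5 or t <= thr7:
        return "自动驾驶控制/车道线偏离报警"  # 第五/第七预设阈值以内:程度最高的一级
    if dist <= thr4 or t <= thr6:
        return "车道线偏离提示"              # 第四/第六预设阈值以内:较轻的一级
    return "正常行驶"

level1 = driving_control(6.0, 6.0)   # 未触发任何预设条件
level2 = driving_control(4.0, 6.0)   # 估计距离落入 (3, 5] 米区间,触发偏离提示
level3 = driving_control(2.5, 2.0)   # 估计距离与估计时间均低于下限阈值,触发报警
```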
在更进一步的可选示例中,可以在基于图像以及历史帧图像确定出的估计距离均小于或等于第五预设阈值时,对车辆进行自动驾驶控制和/或车道线偏离报警,其中,该历史帧图像包括图像所在视频中检测时序位于图像之前的至少一帧图像;或者,在基于图像以及历史帧图像确定出的估计时间均小于或等于第七预设阈值时,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,在基于图像以及历史帧图像确定出的估计距离均小于或等于第五预设阈值、且基于图像以及历史帧图像确定出的估计时间均小于或等于第七预设阈值时,对车辆进行自动驾驶控制和/或车道线偏离报警。
本实施例同时统计历史帧图像的估计距离和/或估计时间,作为对车辆进行自动驾驶控制和/或车道线偏离报警的依据,可以提高对车辆进行自动驾驶控制和/或车道线偏离报警的准确性。
例如,在一个应用实例中,假设车辆当前位置为A,沿着当前行驶方向与一条车道线(假设称为目标车道线)的交点位置为B,那么线段AB即为车辆在当前状态下将驶出该目标车道线的轨迹。根据相机标定参数可以获取车辆在世界坐标系中的绝对位置A’,然后根据该目标车道线的车道线方程,可以计算得出车辆行驶方向的直线A’B与该目标车道线的交点位置B,从而得出直线A’B的长度。再根据车辆的当前行驶速度,可以计算出该车辆驶出该目标车道线的时间t。统计历史帧图像信息,如果该车辆在若干帧图像中即将驶出该目标车道线的时间都过短(小于第七预设阈值),同时该车辆距离该目标车道线的距离A’B过短(小于第五预设阈值),则进行自动驾驶控制和/或车道线偏离报警,例如对该车辆进行减速、同时通过声音报警。同时统计历史帧图像信息可以计算出该车辆在当前时刻的侧向速度,再根据该车辆当前距离该目标车道线的距离,可以计算得到当前时刻车辆距离该目标车道线的压线时间(即到达该目标车道线的时间),作为是否对该车辆进行自动驾驶控制和/或车道线偏离报警的依据。
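作为示意,上述“由车道线方程求交点位置B、再由车速计算驶出时间t”的过程可以用如下纯Python代码片段表示(车道线参数、车辆位置与速度均为假设的示意数据,并设车辆沿世界坐标系y轴正方向行驶):

```python
import math

def exit_point_y(a, b, c, x0):
    """求车辆行驶直线 x = x0(车辆沿 y 轴正方向行驶)与车道线 x = a*y*y + b*y + c 的交点纵坐标。"""
    if abs(a) < 1e-12:                   # 退化为直线形车道线
        return (x0 - c) / b
    disc = b * b - 4.0 * a * (c - x0)
    if disc < 0:
        return None                      # 行驶方向与该车道线无交点
    roots = [(-b + s * math.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    ahead = [y for y in roots if y > 0]  # 只保留车辆前方(y > 0)的交点
    return min(ahead) if ahead else None

# 假设车道线方程为 x = 0.002*y*y + 0.05*y + 1.2,车辆位于世界坐标系中 x0 = 2.0 处并沿 y 轴行驶
y_cross = exit_point_y(0.002, 0.05, 1.2, x0=2.0)  # 即文中直线 A'B 的纵向长度
speed = 10.0                                      # 车辆当前行驶速度,米/秒(示意数据)
t_exit = y_cross / speed                          # 车辆驶出该目标车道线的估计时间 t
```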
其中,车辆与目标车道线之间的距离,可以根据该目标车道线的车道线方程坐标原点的设定、以及车辆行驶方向、车辆宽度获取。例如,如果车道线方程坐标原点设定为车辆的左车轮,目标车道线在该车辆的左侧,则直接获取该车辆与其行驶方向与目标车道线的交点之间的距离即可。如果车道线方程坐标原点设定为车辆的右车轮,目标车道线在该车辆的左侧,则获取该车辆与其行驶方向与目标车道线的交点之间的距离、加上车辆宽度投影在其行驶方向上的有效宽度,即为车辆与目标车道线之间的距离。如果车道线方程坐标原点设定为车辆的中心,目标车道线在该车辆的左侧,则获取该车辆与其行驶方向与目标车道线的交点之间的距离、加上车辆的一半宽度投影在其行驶方向上的有效宽度,即为车辆与目标车道线之间的估计距离。
本公开实施例可以应用于自动驾驶和辅助驾驶场景中,实现精准的车道线检测、自动驾驶控制和车辆偏离车道线预警。
本公开实施例提供的任一种基于车道线的智能驾驶控制方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本公开实施例提供的任一种基于车道线的智能驾驶控制方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本公开实施例提及的任一种基于车道线的智能驾驶控制方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图5为本公开基于车道线的智能驾驶控制装置一个实施例的结构示意图。该实施例基于车道线的智能驾驶控制装置可用于实现本公开上述任一基于车道线的智能驾驶控制方法实施例。如图5所示,该实施例基于车道线的智能驾驶控制装置包括:获取模块,确定模块和控制模块。其中:
获取模块,用于获取车辆行驶环境的车道线检测结果。
确定模块,用于根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间。
控制模块,用于根据估计距离和/或估计时间,对车辆进行智能驾驶控制。
基于本公开上述实施例提供的基于车道线的智能驾驶控制装置,获取车辆行驶环境的车道线检测结果,根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间,根据估计距离和/或估计时间,对车辆进行智能驾驶控制,由此,本公开实施例实现了基于车道线对车辆行驶状态的智能控制,以期将车辆保持在车道线内行驶,降低或避免车辆驶出车道线出现交通事故,提高驾驶安全性。
在其中一些实施方式中,获取模块可以包括:检测单元,用于基于神经网络检测车辆行驶环境的车道线,得到车道线检测结果;或者,获取单元,用于从高级驾驶辅助系统获取车辆行驶环境的车道线检测结果。
在其中一些实施方式中,检测单元可以包括:神经网络,用于对包括车辆行驶环境的图像进行语义分割,输出车道线概率图;车道线概率图用于表示图像中的至少一个像素点分别属于车道线的概率值;确定子单元,用于根据车道线概率图确定车道线所在区域;车道线检测结果包括车道线所在区域。
在其中一些可选示例中,神经网络用于:通过神经网络对图像进行特征提取,得到特征图;以及通过神经网络对特征图进行语义分割,得到N条车道线的车道线概率图;每条车道的车道线概率图中各像素点的像素值表示图像中对应像素点分别属于该条车道线的概率值,N的取值为大于0的整数。
其中,神经网络对特征图进行语义分割,得到N条车道线的车道线概率图时,用于:通过神经网络对特征图进行语义分割,得到N+1个通道的概率图;N+1个通道分别对应于N条车道线和背景;以及从N+1个通道的概率图中获取N条车道线的车道线概率图。
在其中一些可选示例中,N的取值为2,N+1个通道分别对应于背景、左车道线和右车道线;或者,N的取值为3,N+1个通道分别对应于背景、左车道线、中车道线和右车道线;或者,N的取值为4,N+1个通道分别对应于背景、左左车道线、左车道线、右车道线和右右车道线。
在其中一些可选示例中,确定子单元用于:从车道线的车道线概率图中选取概率值大于第一预设阈值的像素点;基于选取出的像素点在车道线概率图中进行最大连通域查找,找出属于车道线的像素点集合;以及基于属于车道线的像素点集合确定车道线所在区域。
例如,确定子单元基于属于车道线的像素点集合确定车道线所在区域时,用于:统计属于车道线的像素点集合中所有像素点的概率值之和,得到车道线的置信度;若置信度大于第二预设阈值,以像素点集合形成的区域作为车道线所在区域。
图6为本公开基于车道线的智能驾驶控制装置另一个实施例的结构示意图。如图6所示,与图5所示的实施例相比,该实施例基于车道线的智能驾驶控制装置还包括:预处理模块,用于对包括车辆行驶环境的原始图像进行预处理。相应地,该实施例中,神经网络对包括车辆行驶环境的图像进行语义分割时,用于对预处理得到的图像进行语义分割。
在其中一些实施方式中,确定模块可以包括:拟合处理单元,用于分别对每条车道线所在区域中的像素点进行曲线拟合,得到每条车道线的车道线信息;车道线信息包括车道线上至少一点到车辆的距离;确定单元,用于根据车辆的行驶状态和车道线的车道线信息,确定车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间。
在其中一些可选示例中,拟合处理单元,用于:从一条车道线所在区域中选取多个像素点;将多个像素点从摄像头所在的相机坐标系转换到世界坐标系中,得到多个像素点在世界坐标系中的坐标;以及根据多个像素点在世界坐标系中的坐标,在世界坐标系中对多个像素点进行曲线拟合,得到车道线的车道线信息。
另外,在另外一些实施方式中,确定模块还可以包括:滤波单元,用于对车道线的车道线信息中的参数进行滤波。相应地,该实施例中,确定单元用于:根据车辆的行驶状态和滤波得到的车道线的车道线信息,确定车辆驶出车道线的估计距离和/或车辆驶出车道线的估计时间。
在其中一些可选示例中,滤波单元,用于根据车道线信息中参数的参数值与基于上一帧图像获得的车道线的历史车道线信息中参数的参数值,对车道线信息中参数的参数值进行卡尔曼滤波;上一帧图像为图像所在视频中检测时序位于图像之前的一帧图像。
相应地,在另一些可选示例中,确定模块还可以包括:选取单元,用于选取车道线信息中参数的参数值相对于历史车道线信息中对应参数的参数值有变化、且车道线信息中参数的参数值与历史车道线信息中对应参数的参数值之间的差值小于第三预设阈值的车道线信息,以作为有效的车道线信息进行卡尔曼滤波。
在另外一些实施方式中,确定模块根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计距离时,用于根据车辆在世界坐标系中的位置、以及车道线的车道线信息,确定车辆与车道线之间的估计距离;车辆的行驶状态包括车辆在世界坐标系中的位置。
在另外一些实施方式中,确定模块根据车辆的行驶状态和车道线检测结果,确定车辆驶出车道线的估计时间时,用于根据车辆的速度和车辆在世界坐标系中的位置、以及车道线的车道线信息,确定车辆驶出车道线的估计时间;车辆的行驶状态包括车辆的速度和车辆在世界坐标系中的位置。
再参见图6,在其中一些实施方式中,控制模块可以包括:比较单元,用于将估计距离和/或估计时间与至少一预定阈值进行比较;控制单元,用于在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制;智能驾驶控制包括:自动驾驶控制和/或辅助驾驶控制。
在其中一些实施方式中,对车辆进行的智能驾驶控制,例如可以包括但不限于对车辆进行如下至少一项控制:自动驾驶控制,辅助驾驶控制,等等。其中,对车辆的自动驾驶控制,例如可以包括但不限于对车辆进行如下任意一项或多项控制:制动、减速、改变行驶方向、车道线保持、驾驶模式切换控制,等等控制车辆驾驶状态的操作。对车辆的辅助驾驶控制,例如可以包括但不限于对车辆进行如下任意一项或多项控制:进行车道线偏离预警,进行车道线保持提示,等等有助于提示驾驶员控制车辆驾驶状态的操作。
可选地,在上述实施方式中,在预设条件包括多个时,该多个预设条件分别对应的智能驾驶控制的程度可以逐级递增。
在其中一些实施方式中,控制单元用于:若估计距离小于或等于第四预设阈值、且大于第五预设阈值,对车辆进行车道线偏离提示;或者,若估计时间小于或等于第六预设阈值、且大于第七预设阈值,对车辆进行车道线偏离提示;或者,若估计距离小于或等于第四预设阈值、且大于第五预设阈值,且估计时间小于或等于第六预设阈值、且大于第七预设阈值,对车辆进行车道线偏离提示。其中,车道线偏离预警包括车道线偏离提示;第五预设阈值小于第四预设阈值,第七预设阈值小于第六预设阈值。
在其中一些实施方式中,控制单元还可用于:若估计距离小于或等于第五预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,若估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,若估计距离小于或等于第五预设阈值,且估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警。其中,车道线偏离预警包括车道线偏离报警。
在进一步的一些实施方式中,控制单元还可用于:若估计距离小于或等于第五预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警时,用于若基于图像以及历史帧图像确定出的估计距离均小于或等于第五预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;历史帧图像包括图像所在视频中检测时序位于图像之前的至少一帧图像;或者,若估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警时,用于:若基于图像以及历史帧图像确定出的估计时间均小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警;或者,若估计距离小于或等于第五预设阈值,且估计时间小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警时,用于:若基于图像以及历史帧图像确定出的估计距离均小于或等于第五预设阈值、且基于图像以及历史帧图像确定出的估计时间均小于或等于第七预设阈值,对车辆进行自动驾驶控制和/或车道线偏离报警。
本公开实施例还提供了一种电子设备,包括本公开上述任一实施例的基于车道线的智能驾驶控制装置。
本公开实施例还提供了另一种电子设备,包括:存储器,用于存储可执行指令;以及处理器,用于与存储器通信以执行可执行指令从而完成本公开上述任一实施例的基于车道线的智能驾驶控制方法的操作。
图7为本公开电子设备一个应用实施例的结构示意图。下面参考图7,其示出了适于用来实现本公开实施例的终端设备或服务器的电子设备的结构示意图。如图7所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU),和/或一个或多个图像处理器(GPU)等,处理器可以根据存储在只读存储器(ROM)中的可执行指令或者从存储部分加载到随机访问存储器(RAM)中的可执行指令而执行各种适当的动作和处理。通信部可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器和/或随机访问存储器通信以执行可执行指令,通过总线与通信部相连、并经通信部与其他目标设备通信,从而完成本公开实施例提供的任一基于车道线的智能驾驶控制方法对应的操作,例如,获取车辆行驶环境的车道线检测结果;根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间;根据所述估计距离和/或所述估计时间,对所述车辆进行智能驾驶控制。
此外,在RAM中,还可存储有装置操作所需的各种程序和数据。CPU、ROM以及RAM通过总线彼此相连。在有RAM的情况下,ROM为可选模块。RAM存储可执行指令,或在运行时向ROM中写入可执行指令,可执行指令使处理器执行本公开上述任一基于车道线的智能驾驶控制方法对应的操作。输入/输出(I/O)接口也连接至总线。通信部可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至I/O接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
需要说明的是,如图7所示的架构仅为一种可选实现方式,在实践过程中,可根据实际需要对上述图7的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如GPU和CPU可分离设置或者可将GPU集成在CPU上,通信部可分离设置,也可集成设置在CPU或GPU上,等等。这些可替换的实施方式均落入本公开的保护范围。
另外,本公开实施例还提供了一种计算机存储介质,用于存储计算机可读取的指令,该指令被执行时实现本公开上述任一实施例的基于车道线的智能驾驶控制方法的操作。
另外,本公开实施例还提供了一种计算机程序,包括计算机可读取的指令,当该计算机可读取的指令在设备中运行时,该设备中的处理器执行用于实现本公开上述任一实施例的基于车道线的智能驾驶控制方法中的步骤的可执行指令。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本公开的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本公开的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本公开的方法的步骤不限于以上描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本公开实施为记录在记录介质中的程序,这些程序包括用于实现根据本公开的方法的机器可读指令。因而,本公开还覆盖存储用于执行根据本公开的方法的程序的记录介质。
本公开的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本公开限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本公开的原理和实际应用,并且使本领域的普通技术人员能够理解本公开从而设计适于特定用途的带有各种修改的各种实施例。

Claims (49)

  1. 一种基于车道线的智能驾驶控制方法,其特征在于,包括:
    获取车辆行驶环境的车道线检测结果;
    根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间;
    根据所述估计距离和/或所述估计时间,对所述车辆进行智能驾驶控制。
  2. 根据权利要求1所述的方法,其特征在于,所述获取车辆行驶环境的车道线检测结果,包括:
    基于神经网络检测所述车辆行驶环境的车道线,得到所述车道线检测结果;或者,从高级驾驶辅助系统获取所述车辆行驶环境的车道线检测结果。
  3. 根据权利要求2所述的方法,其特征在于,所述基于神经网络检测所述车辆行驶环境的车道线,得到所述车道线检测结果,包括:
    通过神经网络对包括所述车辆行驶环境的图像进行语义分割,输出车道线概率图;所述车道线概率图用于表示所述图像中的至少一个像素点分别属于车道线的概率值;
    根据所述车道线概率图确定车道线所在区域;所述车道线检测结果包括所述车道线所在区域。
  4. 根据权利要求3所述的方法,其特征在于,所述通过神经网络对包括所述车辆行驶环境的图像进行语义分割,输出车道线概率图,包括:
    通过所述神经网络对所述图像进行特征提取,得到特征图;
    通过所述神经网络对所述特征图进行语义分割,得到N条车道线的车道线概率图;每条车道的车道线概率图中各像素点的像素值表示所述图像中对应像素点分别属于该条车道线的概率值,N的取值为大于0的整数。
  5. 根据权利要求4所述的方法,其特征在于,所述通过所述神经网络对所述特征图进行语义分割,得到N条车道线的车道线概率图,包括:
    通过所述神经网络对所述特征图进行语义分割,得到N+1个通道的概率图;所述N+1个通道分别对应于N条车道线和背景;
    从所述N+1个通道的概率图中获取所述N条车道线的车道线概率图。
  6. 根据权利要求4或5所述的方法,其特征在于,
    N的取值为2,所述N+1个通道分别对应于背景、左车道线和右车道线;或者,
    N的取值为3,所述N+1个通道分别对应于背景、左车道线、中车道线和右车道线;或者,
    N的取值为4,所述N+1个通道分别对应于背景、左左车道线、左车道线、右车道线和右右车道线。
  7. 根据权利要求4-6任一所述的方法,其特征在于,根据一条车道线的车道线概率图确定所述车道线所在区域,包括:
    从车道线的车道线概率图中选取概率值大于第一预设阈值的像素点;
    基于选取出的像素点在所述车道线概率图中进行最大连通域查找,找出属于所述车道线的像素点集合;
    基于属于所述车道线的像素点集合确定所述车道线所在区域。
  8. 根据权利要求7所述的方法,其特征在于,所述基于属于所述车道线的像素点集合确定所述车道线所在区域,包括:
    统计属于所述车道线的像素点集合中所有像素点的概率值之和,得到所述车道线的置信度;
    若所述置信度大于第二预设阈值,以所述像素点集合形成的区域作为所述车道线所在区域。
  9. 根据权利要求3-8任一所述的方法,其特征在于,还包括:
    对包括所述车辆行驶环境的原始图像进行预处理;
    所述通过神经网络对包括所述车辆行驶环境的图像进行语义分割,包括:通过所述神经网络,对预处理得到的所述图像进行语义分割。
  10. 根据权利要求3-9任一所述的方法,其特征在于,所述根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间,包括:
    分别对每条所述车道线所在区域中的像素点进行曲线拟合,得到每条所述车道线的车道线信息;所述车道线信息包括所述车道线上至少一点到所述车辆的距离;
    根据所述车辆的行驶状态和所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
  11. 根据权利要求10所述的方法,其特征在于,所述对所述车道线所在区域中的像素点进行曲线拟合,得到所述车道线的车道线信息,包括:
    从一条所述车道线所在区域中选取多个像素点;
    将所述多个像素点从所述摄像头所在的相机坐标系转换到世界坐标系中,得到所述多个像素点在世界坐标系中的坐标;
    根据所述多个像素点在世界坐标系中的坐标,在世界坐标系中对所述多个像素点进行曲线拟合,得到所述车道线的车道线信息。
  12. 根据权利要求10或11所述的方法,其特征在于,所述得到所述车道线的车道线信息之后,还包括:
    对所述车道线的车道线信息中的参数进行滤波;
    所述根据所述车辆的行驶状态和所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间,包括:根据所述车辆的行驶状态和滤波得到的所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
  13. 根据权利要求12所述的方法,其特征在于,所述对所述车道线的车道线信息中的参数进行滤波,包括:
    根据所述车道线信息中参数的参数值与基于上一帧图像获得的所述车道线的历史车道线信息中参数的参数值,对所述车道线信息中参数的参数值进行卡尔曼滤波;所述上一帧图像为所述图像所在视频中检测时序位于所述图像之前的一帧图像。
  14. 根据权利要求13所述的方法,其特征在于,所述对所述车道线信息中参数的参数值进行卡尔曼滤波之前,还包括:
    选取所述车道线信息中参数的参数值相对于所述历史车道线信息中对应参数的参数值有变化、且所述车道线信息中参数的参数值与所述历史车道线信息中对应参数的参数值之间的差值小于第三预设阈值的所述车道线信息,以作为有效的车道线信息进行卡尔曼滤波。
  15. 根据权利要求10-14任一所述的方法,其特征在于,所述根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离,包括:
    根据所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆与所述车道线之间的估计距离;所述车辆的行驶状态包括所述车辆在世界坐标系中的位置。
  16. 根据权利要求10-14任一所述的方法,其特征在于,所述根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计时间,包括:
    根据所述车辆的速度和所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计时间;所述车辆的行驶状态包括所述车辆的速度和所述车辆在世界坐标系中的位置。
  17. 根据权利要求1-16任一所述的方法,其特征在于,所述根据所述估计距离和/或所述估计时间,对所述车辆进行智能驾驶控制,包括:
    将所述估计距离和/或所述估计时间与至少一预定阈值进行比较;
    在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制;所述智能驾驶控制包括:自动驾驶控制和/或辅助驾驶控制。
  18. 根据权利要求17所述的方法,其特征在于,所述自动驾驶控制包括以下任意一项或多项:制动、减速、改变行驶方向、车道线保持、驾驶模式切换控制。
  19. 根据权利要求18所述的方法,其特征在于,所述对所述车辆进行辅助驾驶控制包括:进行车道线偏离预警;或者,进行车道线保持提示。
  20. 根据权利要求17-19任一所述的方法,其特征在于,在所述预设条件包括多个时,多个预设条件分别对应的智能驾驶控制的程度逐级递增。
  21. 根据权利要求20所述的方法,其特征在于,所述在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制,包括:
    若所述估计距离小于或等于第四预设阈值、且大于第五预设阈值,对所述车辆进行车道线偏离提示;或者,
    若所述估计时间小于或等于第六预设阈值、且大于第七预设阈值,对所述车辆进行车道线偏离提示;或者,
    若所述估计距离小于或等于第四预设阈值、且大于第五预设阈值,且所述估计时间小于或等于第六预设阈值、且大于第七预设阈值,对所述车辆进行车道线偏离提示;
    其中,所述车道线偏离预警包括所述车道线偏离提示;所述第五预设阈值小于所述第四预设阈值,所述第七预设阈值小于所述第六预设阈值。
  22. 根据权利要求21所述的方法,其特征在于,所述在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制,还包括:
    若所述估计距离小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    若所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    若所述估计距离小于或等于所述第五预设阈值,且所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;
    其中,所述车道线偏离预警包括所述车道线偏离报警。
  23. 根据权利要求22所述的方法,其特征在于,
    所述若所述估计距离小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警,包括:若基于所述图像以及历史帧图像确定出的所述估计距离均小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;所述历史帧图像包括所述图像所在视频中检测时序位于所述图像之前的至少一帧图像;或者,
    所述若所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警,包括:若基于所述图像以及历史帧图像确定出的所述估计时间均小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    所述若所述估计距离小于或等于所述第五预设阈值,且所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警,包括:若基于所述图像以及历史帧图像确定出的所述估计距离均小于或等于所述第五预设阈值、且基于所述图像以及历史帧图像确定出的所述估计时间均小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警。
  24. 一种基于车道线的智能驾驶控制装置,其特征在于,包括:
    获取模块,用于获取车辆行驶环境的车道线检测结果;
    确定模块,用于根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间;
    控制模块,用于根据所述估计距离和/或所述估计时间,对所述车辆进行智能驾驶控制。
  25. 根据权利要求24所述的装置,其特征在于,所述获取模块包括:
    检测单元,用于基于神经网络检测所述车辆行驶环境的车道线,得到所述车道线检测结果;或者,
    获取单元,用于从高级驾驶辅助系统获取所述车辆行驶环境的车道线检测结果。
  26. 根据权利要求25所述的装置,其特征在于,所述检测单元包括:
    神经网络,用于对包括所述车辆行驶环境的图像进行语义分割,输出车道线概率图;所述车道线概率图用于表示所述图像中的至少一个像素点分别属于车道线的概率值;
    确定子单元,用于根据所述车道线概率图确定车道线所在区域;所述车道线检测结果包括所述车道线所在区域。
  27. 根据权利要求26所述的装置,其特征在于,所述神经网络用于:通过所述神经网络对所述图像进行特征提取,得到特征图;以及通过所述神经网络对所述特征图进行语义分割,得到N条车道线的车道线概率图;每条车道的车道线概率图中各像素点的像素值表示所述图像中对应像素点分别属于该条车道线的概率值,N的取值为大于0的整数。
  28. 根据权利要求27所述的装置,其特征在于,所述神经网络对所述特征图进行语义分割,得到N条车道线的车道线概率图时,用于:通过所述神经网络对所述特征图进行语义分割,得到N+1个通道的概率图;所述N+1个通道分别对应于N条车道线和背景;以及从所述N+1个通道的概率图中获取所述N条车道线的车道线概率图。
  29. 根据权利要求27或28所述的装置,其特征在于,
    N的取值为2,所述N+1个通道分别对应于背景、左车道线和右车道线;或者,
    N的取值为3,所述N+1个通道分别对应于背景、左车道线、中车道线和右车道线;或者,
    N的取值为4,所述N+1个通道分别对应于背景、左左车道线、左车道线、右车道线和右右车道线。
  30. 根据权利要求27-29任一所述的装置,其特征在于,所述确定子单元用于:从车道线的车道线概率图中选取概率值大于第一预设阈值的像素点;基于选取出的像素点在所述车道线概率图中进行最大连通域查找,找出属于所述车道线的像素点集合;以及基于属于所述车道线的像素点集合确定所述车道线所在区域。
  31. 根据权利要求30所述的装置,其特征在于,所述确定子单元基于属于所述车道线的像素点集合确定所述车道线所在区域时,用于:统计属于所述车道线的像素点集合中所有像素点的概率值之和,得到所述车道线的置信度;若所述置信度大于第二预设阈值,以所述像素点集合形成的区域作为所述车道线所在区域。
  32. 根据权利要求26-31任一所述的装置,其特征在于,还包括:
    预处理模块,用于对包括所述车辆行驶环境的原始图像进行预处理;
    所述神经网络对包括所述车辆行驶环境的图像进行语义分割时,用于对预处理得到的所述图像进行语义分割。
  33. 根据权利要求26-32任一所述的装置,其特征在于,所述确定模块包括:
    拟合处理单元,用于分别对每条所述车道线所在区域中的像素点进行曲线拟合,得到每条所述车道线的车道线信息;所述车道线信息包括所述车道线上至少一点到所述车辆的距离;
    确定单元,用于根据所述车辆的行驶状态和所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
  34. 根据权利要求33所述的装置,其特征在于,所述拟合处理单元,用于:从一条所述车道线所在区域中选取多个像素点;将所述多个像素点从所述摄像头所在的相机坐标系转换到世界坐标系中,得到所述多个像素点在世界坐标系中的坐标;以及根据所述多个像素点在世界坐标系中的坐标,在世界坐标系中对所述多个像素点进行曲线拟合,得到所述车道线的车道线信息。
  35. 根据权利要求33或34所述的装置,其特征在于,所述确定模块还包括:
    滤波单元,用于对所述车道线的车道线信息中的参数进行滤波;
    所述确定单元用于:根据所述车辆的行驶状态和滤波得到的所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
  36. 根据权利要求35所述的装置,其特征在于,所述滤波单元,用于根据所述车道线信息中参数的参数值与基于上一帧图像获得的所述车道线的历史车道线信息中参数的参数值,对所述车道线信息中参数的参数值进行卡尔曼滤波;所述上一帧图像为所述图像所在视频中检测时序位于所述图像之前的一帧图像。
  37. 根据权利要求36所述的装置,其特征在于,所述确定模块还包括:
    选取单元,用于选取所述车道线信息中参数的参数值相对于所述历史车道线信息中对应参数的参数值有变化、且所述车道线信息中参数的参数值与所述历史车道线信息中对应参数的参数值之间的差值小于第三预设阈值的所述车道线信息,以作为有效的车道线信息进行卡尔曼滤波。
  38. 根据权利要求33-37任一所述的装置,其特征在于,所述确定模块根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离时,用于根据所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆与所述车道线之间的估计距离;所述车辆的行驶状态包括所述车辆在世界坐标系中的位置。
  39. 根据权利要求33-37任一所述的装置,其特征在于,所述确定模块根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计时间时,用于根据所述车辆的速度和所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计时间;所述车辆的行驶状态包括所述车辆的速度和所述车辆在世界坐标系中的位置。
  40. 根据权利要求24-39任一所述的装置,其特征在于,所述控制模块包括:
    比较单元,用于将所述估计距离和/或所述估计时间与至少一预定阈值进行比较;
    控制单元,用于在比较结果满足一个或多个预设条件时,进行所满足的预设条件相应的智能驾驶控制;所述智能驾驶控制包括:自动驾驶控制和/或辅助驾驶控制。
  41. 根据权利要求40所述的装置,其特征在于,所述自动驾驶控制包括以下任意一项或多项:制动、减速、改变行驶方向、车道线保持、驾驶模式切换控制。
  42. 根据权利要求41所述的装置,其特征在于,所述控制单元对所述车辆进行辅助驾驶控制时,用于进行车道线偏离预警;或者,进行车道线保持提示。
  43. 根据权利要求40-42任一所述的装置,其特征在于,在所述预设条件包括多个时,多个预设条件分别对应的智能驾驶控制的程度逐级递增。
  44. 根据权利要求43所述的装置,其特征在于,所述控制单元,用于:
    若所述估计距离小于或等于第四预设阈值、且大于第五预设阈值,对所述车辆进行车道线偏离提示;或者,
    若所述估计时间小于或等于第六预设阈值、且大于第七预设阈值,对所述车辆进行车道线偏离提示;或者,
    若所述估计距离小于或等于第四预设阈值、且大于第五预设阈值,且所述估计时间小于或等于第六预设阈值、且大于第七预设阈值,对所述车辆进行车道线偏离提示;
    其中,所述车道线偏离预警包括所述车道线偏离提示;所述第五预设阈值小于所述第四预设阈值,所述第七预设阈值小于所述第六预设阈值。
  45. 根据权利要求44所述的装置,其特征在于,所述控制单元,还用于:
    若所述估计距离小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    若所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    若所述估计距离小于或等于所述第五预设阈值,且所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;
    其中,所述车道线偏离预警包括所述车道线偏离报警。
  46. 根据权利要求45所述的装置,其特征在于,所述控制单元:
    若所述估计距离小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警时,用于若基于所述图像以及历史帧图像确定出的所述估计距离均小于或等于所述第五预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;所述历史帧图像包括所述图像所在视频中检测时序位于所述图像之前的至少一帧图像;或者,
    若所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警时,用于:若基于所述图像以及历史帧图像确定出的所述估计时间均小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警;或者,
    若所述估计距离小于或等于所述第五预设阈值,且所述估计时间小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警时,用于:若基于所述图像以及历史帧图像确定出的所述估计距离均小于或等于所述第五预设阈值、且基于所述图像以及历史帧图像确定出的所述估计时间均小于或等于所述第七预设阈值,对所述车辆进行自动驾驶控制和/或车道线偏离报警。
  47. 一种电子设备,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,且所述计算机程序被执行时,实现上述权利要求1-23任一所述的方法。
  48. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该计算机程序被处理器执行时,实现上述权利要求1-23任一所述的方法。
  49. 一种计算机程序,包括计算机指令,其特征在于,当所述计算机指令在设备的处理器中运行时,实现上述权利要求1-23任一所述的方法。
PCT/CN2019/087622 2018-05-31 2019-05-20 基于车道线的智能驾驶控制方法和装置、电子设备 WO2019228211A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020554361A JP7024115B2 (ja) 2018-05-31 2019-05-20 区画線に基づくインテリジェントドライブ制御方法および装置、ならびに電子機器
SG11202005094XA SG11202005094XA (en) 2018-05-31 2019-05-20 Lane line-based intelligent driving control method and apparatus, and electronic device
US16/886,163 US11314973B2 (en) 2018-05-31 2020-05-28 Lane line-based intelligent driving control method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810551908.XA CN108875603B (zh) 2018-05-31 2018-05-31 基于车道线的智能驾驶控制方法和装置、电子设备
CN201810551908.X 2018-05-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/886,163 Continuation US11314973B2 (en) 2018-05-31 2020-05-28 Lane line-based intelligent driving control method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2019228211A1 true WO2019228211A1 (zh) 2019-12-05

Family

ID=64335045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087622 WO2019228211A1 (zh) 2018-05-31 2019-05-20 基于车道线的智能驾驶控制方法和装置、电子设备

Country Status (5)

Country Link
US (1) US11314973B2 (zh)
JP (1) JP7024115B2 (zh)
CN (1) CN108875603B (zh)
SG (1) SG11202005094XA (zh)
WO (1) WO2019228211A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160370A (zh) * 2019-12-27 2020-05-15 深圳佑驾创新科技有限公司 车头位置估计方法、装置、计算机设备和存储介质
CN111721316A (zh) * 2020-06-22 2020-09-29 重庆大学 一种高性能的车道线识别感兴趣区域预测方法
CN112287842A (zh) * 2020-10-29 2021-01-29 恒大新能源汽车投资控股集团有限公司 一种车道线的识别方法、装置及电子设备
CN112364822A (zh) * 2020-11-30 2021-02-12 重庆电子工程职业学院 一种自动驾驶视频语义分割系统及方法
CN112906665A (zh) * 2021-04-06 2021-06-04 北京车和家信息技术有限公司 交通标线融合方法、装置、存储介质及电子设备
CN113657265A (zh) * 2021-08-16 2021-11-16 长安大学 一种车辆距离探测方法、系统、设备及介质
US11318958B2 (en) * 2020-11-30 2022-05-03 Beijing Baidu Netcom Science Technology Co., Ltd. Vehicle driving control method, apparatus, vehicle, electronic device and storage medium
CN114565681A (zh) * 2022-03-01 2022-05-31 禾多科技(北京)有限公司 一种相机标定方法、装置、设备、介质及产品
CN114743178A (zh) * 2021-12-29 2022-07-12 北京百度网讯科技有限公司 道路边缘线生成方法、装置、设备及存储介质
EP4202759A4 (en) * 2020-09-09 2023-10-25 Huawei Technologies Co., Ltd. TRAFFIC LANE LINE DETECTION METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIUM

Families Citing this family (43)

Publication number Priority date Publication date Assignee Title
CN108875603B (zh) * 2018-05-31 2021-06-04 上海商汤智能科技有限公司 基于车道线的智能驾驶控制方法和装置、电子设备
CN109147368A (zh) * 2018-08-22 2019-01-04 北京市商汤科技开发有限公司 基于车道线的智能驾驶控制方法装置与电子设备
CN110858405A (zh) * 2018-08-24 2020-03-03 北京市商汤科技开发有限公司 车载摄像头的姿态估计方法、装置和系统及电子设备
KR102633140B1 (ko) * 2018-10-23 2024-02-05 삼성전자주식회사 주행 정보를 결정하는 방법 및 장치
CN111209777A (zh) * 2018-11-21 2020-05-29 北京市商汤科技开发有限公司 车道线检测方法、装置、电子设备及可读存储介质
JP6852141B2 (ja) * 2018-11-29 2021-03-31 キヤノン株式会社 情報処理装置、撮像装置、情報処理装置の制御方法、および、プログラム
CN109298719B (zh) * 2018-12-04 2021-11-02 奇瑞汽车股份有限公司 智能汽车的接管方法、装置及存储介质
CN109582019B (zh) * 2018-12-04 2021-06-29 奇瑞汽车股份有限公司 智能汽车变道失效时的接管方法、装置及存储介质
CN109472251B (zh) * 2018-12-16 2022-04-05 华为技术有限公司 一种物体碰撞预测方法及装置
CN111316337A (zh) * 2018-12-26 2020-06-19 深圳市大疆创新科技有限公司 车载成像装置的安装参数的确定与驾驶控制方法及设备
CN109598943A (zh) * 2018-12-30 2019-04-09 北京旷视科技有限公司 车辆违章的监控方法、装置及系统
CN111460866B (zh) * 2019-01-22 2023-12-22 北京市商汤科技开发有限公司 车道线检测及驾驶控制方法、装置和电子设备
CN111476062A (zh) * 2019-01-23 2020-07-31 北京市商汤科技开发有限公司 车道线检测方法、装置、电子设备及驾驶系统
CN111476057B (zh) * 2019-01-23 2024-03-26 北京市商汤科技开发有限公司 车道线获取方法及装置、车辆驾驶方法及装置
CN109866684B (zh) * 2019-03-15 2021-06-22 江西江铃集团新能源汽车有限公司 车道偏离预警方法、系统、可读存储介质及计算机设备
CN112131914B (zh) * 2019-06-25 2022-10-21 北京市商汤科技开发有限公司 车道线属性检测方法、装置、电子设备及智能设备
CN110781768A (zh) * 2019-09-30 2020-02-11 奇点汽车研发中心有限公司 目标对象检测方法和装置、电子设备和介质
CN110706374B (zh) * 2019-10-10 2021-06-29 南京地平线机器人技术有限公司 运动状态预测方法、装置、电子设备及车辆
CN111091096B (zh) * 2019-12-20 2023-07-11 江苏中天安驰科技有限公司 车辆偏离决策方法、装置、存储介质及车辆
CN111079695B (zh) * 2019-12-30 2021-06-01 北京华宇信息技术有限公司 一种人体关键点检测与自学习方法及装置
CN111257005B (zh) * 2020-01-21 2022-11-01 北京百度网讯科技有限公司 用于测试自动驾驶车辆的方法、装置、设备和存储介质
CN111401446A (zh) * 2020-03-16 2020-07-10 重庆长安汽车股份有限公司 单传感器、多传感器车道线合理性检测方法、系统及车辆
US20220009494A1 (en) * 2020-07-07 2022-01-13 Honda Motor Co., Ltd. Control device, control method, and vehicle
CN111814746A (zh) * 2020-08-07 2020-10-23 平安科技(深圳)有限公司 一种识别车道线的方法、装置、设备及存储介质
CN112115857B (zh) * 2020-09-17 2024-03-01 福建牧月科技有限公司 智能汽车的车道线识别方法、装置、电子设备及介质
CN112172829B (zh) * 2020-10-23 2022-05-17 科大讯飞股份有限公司 车道偏离预警方法、装置、电子设备和存储介质
CN114612736A (zh) * 2020-12-08 2022-06-10 广州汽车集团股份有限公司 一种车道线检测方法、系统及计算机可读介质
CN114620059A (zh) * 2020-12-14 2022-06-14 广州汽车集团股份有限公司 一种自动驾驶方法及其系统、计算机可读存储介质
JP7048833B1 (ja) * 2020-12-28 2022-04-05 本田技研工業株式会社 車両制御装置、車両制御方法、およびプログラム
CN112766133A (zh) * 2021-01-14 2021-05-07 金陵科技学院 一种基于ReliefF-DBN的自动驾驶偏离处理方法
CN113053124B (zh) * 2021-03-25 2022-03-15 英博超算(南京)科技有限公司 一种智能车辆的测距系统
CN113188509B (zh) * 2021-04-28 2023-10-24 上海商汤临港智能科技有限公司 一种测距方法、装置、电子设备及存储介质
CN113255506B (zh) * 2021-05-20 2022-10-18 浙江合众新能源汽车有限公司 动态车道线控制方法、系统、设备和计算机可读介质
CN113609980A (zh) * 2021-08-04 2021-11-05 东风悦享科技有限公司 一种用于自动驾驶车辆的车道线感知方法及装置
CN113706705B (zh) * 2021-09-03 2023-09-26 北京百度网讯科技有限公司 用于高精地图的图像处理方法、装置、设备以及存储介质
US11845429B2 (en) * 2021-09-30 2023-12-19 GM Global Technology Operations LLC Localizing and updating a map using interpolated lane edge data
CN114454888B (zh) * 2022-02-22 2023-10-13 福思(杭州)智能科技有限公司 一种车道线预测方法、装置、电子设备及车辆
CN114663529B (zh) * 2022-03-22 2023-08-01 阿波罗智能技术(北京)有限公司 一种外参确定方法、装置、电子设备及存储介质
CN114475641B (zh) * 2022-04-15 2022-06-28 天津所托瑞安汽车科技有限公司 车道偏离预警方法、装置、控制装置及存储介质
CN115082888B (zh) * 2022-08-18 2022-10-25 北京轻舟智航智能技术有限公司 一种车道线检测方法和装置
CN115235500B (zh) * 2022-09-15 2023-04-14 北京智行者科技股份有限公司 基于车道线约束的位姿校正方法及装置、全工况静态环境建模方法及装置
CN116682087B (zh) * 2023-07-28 2023-10-31 安徽中科星驰自动驾驶技术有限公司 基于空间池化网络车道检测的自适应辅助驾驶方法
CN117437792B (zh) * 2023-12-20 2024-04-09 中交第一公路勘察设计研究院有限公司 基于边缘计算的实时道路交通状态监测方法、设备及系统

Citations (7)

Publication number Priority date Publication date Assignee Title
US20100002911A1 (en) * 2008-07-06 2010-01-07 Jui-Hung Wu Method for detecting lane departure and apparatus thereof
CN101894271A (zh) * 2010-07-28 2010-11-24 重庆大学 汽车偏离车道线角度和距离的视觉计算及预警方法
CN101915672A (zh) * 2010-08-24 2010-12-15 清华大学 车道偏离报警系统的测试装置及测试方法
CN101966838A (zh) * 2010-09-10 2011-02-09 奇瑞汽车股份有限公司 一种车道偏离警示系统
CN105320927A (zh) * 2015-03-25 2016-02-10 中科院微电子研究所昆山分所 车道线检测方法及系统
CN107169468A (zh) * 2017-05-31 2017-09-15 北京京东尚科信息技术有限公司 用于控制车辆的方法和装置
CN108875603A (zh) * 2018-05-31 2018-11-23 上海商汤智能科技有限公司 基于车道线的智能驾驶控制方法和装置、电子设备

Family Cites Families (24)

Publication number Priority date Publication date Assignee Title
JPH09270098A (ja) * 1996-04-02 1997-10-14 Mitsubishi Motors Corp 車線逸脱警報装置
JP2003104147A (ja) * 2001-09-27 2003-04-09 Mazda Motor Corp 車両の逸脱警報装置
JP5745220B2 (ja) * 2006-06-11 2015-07-08 ボルボ テクノロジー コーポレイション 自動化車線維持システムを用いて車両側方間隔を維持する方法および装置
JP2010191893A (ja) * 2009-02-20 2010-09-02 Nissan Motor Co Ltd 運転不全状態検出装置及び運転不全状態検出方法
JP5389864B2 (ja) * 2011-06-17 2014-01-15 クラリオン株式会社 車線逸脱警報装置
EP2629243A1 (de) * 2012-02-15 2013-08-21 Delphi Technologies, Inc. Verfahren zum Erkennen und Verfolgen von Fahrspurmarkierungen
JP5926080B2 (ja) 2012-03-19 2016-05-25 株式会社日本自動車部品総合研究所 走行区画線認識装置およびプログラム
WO2013186903A1 (ja) * 2012-06-14 2013-12-19 トヨタ自動車株式会社 車線区分標示検出装置、運転支援システム
DE112013004267T5 (de) * 2012-08-30 2015-06-25 Honda Motor Co., Ltd. Fahrbahnmarkierungserkennungsvorrichtung
CN103832433B (zh) * 2012-11-21 2016-08-10 中国科学院沈阳计算技术研究所有限公司 车道偏离及前车防碰撞报警系统及其实现方法
WO2015083009A1 (en) * 2013-12-04 2015-06-11 Mobileye Vision Technologies Ltd. Systems and methods for mimicking a leading vehicle
US9988047B2 (en) * 2013-12-12 2018-06-05 Magna Electronics Inc. Vehicle control system with traffic driving control
EP3292024A4 (en) * 2015-05-06 2018-06-20 Magna Mirrors of America, Inc. Vehicle vision system with blind zone display and alert system
WO2016183074A1 (en) * 2015-05-10 2016-11-17 Mobileye Vision Technologies Ltd. Road profile along a predicted path
US9916522B2 (en) 2016-03-11 2018-03-13 Kabushiki Kaisha Toshiba Training constrained deconvolutional networks for road scene semantic segmentation
US10049279B2 (en) * 2016-03-11 2018-08-14 Qualcomm Incorporated Recurrent networks with motion-based attention for video understanding
JP6672076B2 (ja) * 2016-05-27 2020-03-25 株式会社東芝 情報処理装置及び移動体装置
JP6310503B2 (ja) * 2016-06-06 2018-04-11 本田技研工業株式会社 車両及びレーン変更タイミング判定方法
US10859395B2 (en) * 2016-12-30 2020-12-08 DeepMap Inc. Lane line creation for high definition maps for autonomous vehicles
US11493918B2 (en) * 2017-02-10 2022-11-08 Magna Electronics Inc. Vehicle driving assist system with driver attentiveness assessment
CN106919915B (zh) * 2017-02-22 2020-06-12 武汉极目智能技术有限公司 基于adas系统的地图道路标记及道路质量采集装置及方法
DE112019000070T5 (de) * 2018-01-07 2020-03-12 Nvidia Corporation Führen von fahrzeugen durch fahrzeugmanöver unter verwendung von modellen für maschinelles lernen
WO2019168869A1 (en) * 2018-02-27 2019-09-06 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
JP7008617B2 (ja) * 2018-12-21 2022-01-25 本田技研工業株式会社 車両制御装置

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160370A (zh) * 2019-12-27 2020-05-15 深圳佑驾创新科技有限公司 车头位置估计方法、装置、计算机设备和存储介质
CN111160370B (zh) * 2019-12-27 2024-02-27 佑驾创新(北京)技术有限公司 车头位置估计方法、装置、计算机设备和存储介质
CN111721316A (zh) * 2020-06-22 2020-09-29 重庆大学 一种高性能的车道线识别感兴趣区域预测方法
EP4202759A4 (en) * 2020-09-09 2023-10-25 Huawei Technologies Co., Ltd. TRAFFIC LANE LINE DETECTION METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIUM
CN112287842A (zh) * 2020-10-29 2021-01-29 恒大新能源汽车投资控股集团有限公司 一种车道线的识别方法、装置及电子设备
US11318958B2 (en) * 2020-11-30 2022-05-03 Beijing Baidu Netcom Science Technology Co., Ltd. Vehicle driving control method, apparatus, vehicle, electronic device and storage medium
CN112364822B (zh) * 2020-11-30 2022-08-19 重庆电子工程职业学院 Autonomous driving video semantic segmentation system and method
CN112364822A (zh) * 2020-11-30 2021-02-12 重庆电子工程职业学院 Autonomous driving video semantic segmentation system and method
CN112906665A (zh) * 2021-04-06 2021-06-04 北京车和家信息技术有限公司 Traffic marking fusion method and apparatus, storage medium, and electronic device
CN113657265A (zh) * 2021-08-16 2021-11-16 Chang'an University Vehicle distance detection method, system, device, and medium
CN113657265B (zh) * 2021-08-16 2023-10-10 Chang'an University Vehicle distance detection method, system, device, and medium
CN114743178A (zh) * 2021-12-29 2022-07-12 北京百度网讯科技有限公司 Road edge line generation method, apparatus, device, and storage medium
CN114743178B (zh) * 2021-12-29 2024-03-08 北京百度网讯科技有限公司 Road edge line generation method, apparatus, device, and storage medium
CN114565681A (zh) * 2022-03-01 2022-05-31 禾多科技(北京)有限公司 Camera calibration method, apparatus, device, medium, and product
CN114565681B (zh) * 2022-03-01 2022-11-22 禾多科技(北京)有限公司 Camera calibration method, apparatus, device, medium, and product

Also Published As

Publication number Publication date
JP7024115B2 (ja) 2022-02-22
CN108875603A (zh) 2018-11-23
SG11202005094XA (en) 2020-06-29
CN108875603B (zh) 2021-06-04
US11314973B2 (en) 2022-04-26
US20200293797A1 (en) 2020-09-17
JP2021508901A (ja) 2021-03-11

Similar Documents

Publication Publication Date Title
WO2019228211A1 (zh) Lane-line-based intelligent driving control method and apparatus, and electronic device
JP7106664B2 (ja) Intelligent driving control method and apparatus, electronic device, program, and medium
US11643076B2 (en) Forward collision control method and apparatus, electronic device, program, and medium
US11840239B2 (en) Multiple exposure event determination
US9965719B2 (en) Subcategory-aware convolutional neural networks for object detection
US10984266B2 (en) Vehicle lamp detection methods and apparatuses, methods and apparatuses for implementing intelligent driving, media and devices
WO2019114036A1 (zh) 人脸检测方法及装置、计算机装置和计算机可读存储介质
US20210117704A1 (en) Obstacle detection method, intelligent driving control method, electronic device, and non-transitory computer-readable storage medium
Haque et al. A computer vision based lane detection approach
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
CN110781768A (zh) 目标对象检测方法和装置、电子设备和介质
JP2021530048A (ja) 多階層化目標類別方法及び装置、交通標識検出方法及び装置、機器並びに媒体
Mu et al. Multiscale edge fusion for vehicle detection based on difference of Gaussian
Saleh et al. Traffic signs recognition and distance estimation using a monocular camera
Muthalagu et al. Vehicle lane markings segmentation and keypoint determination using deep convolutional neural networks
Gabb et al. High-performance on-road vehicle detection in monocular images
Lin et al. Improved traffic sign recognition for in-car cameras
Virgilio G et al. Vision-based blind spot warning system by deep neural networks
Liu et al. Detection of geometric shape for traffic lane and mark
Wang et al. G-NET: Accurate Lane Detection Model for Autonomous Vehicle
Manoharan et al. Detection of unstructured roads from a single image for autonomous navigation applications
Pydipogu et al. Robust lane detection and object tracking In relation to the intelligence transport system
TWI832270B (zh) Road condition detection method, electronic device, and computer-readable storage medium
EP4224361A1 (en) Lane line detection method and apparatus
Doshi et al. ROI based real time straight lane line detection using Canny Edge Detector and masked bitwise operator

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19810672

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020554361

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19810672

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/03/2021)