WO2019228211A1 - Lane line-based intelligent driving control method and apparatus, and electronic device - Google Patents
Lane line-based intelligent driving control method and apparatus, and electronic device
- Publication number
- WO2019228211A1 (PCT/CN2019/087622)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lane line
- vehicle
- lane
- preset threshold
- driving control
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
Definitions
- Lane line detection is a key technology in automatic driving and assisted driving. It detects the lane lines around a vehicle on the road, so as to determine the current position of the vehicle and provide key information for subsequent warnings.
- the embodiments of the present disclosure provide a technical solution for lane line-based intelligent driving control.
- a lane line-based intelligent driving control device, including:
- a determining module configured to determine an estimated distance until the vehicle drives out of the lane line and / or an estimated time until the vehicle drives out of the lane line according to a driving state of the vehicle and a detection result of the lane line;
- a control module configured to perform intelligent driving control on the vehicle according to the estimated distance and / or the estimated time.
- an electronic device including:
- a memory storing a computer program, and a processor configured to execute the computer program stored in the memory; when the computer program is executed, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
- a computer program including computer instructions, and when the computer instructions are run in a processor of a device, the method according to any one of the foregoing embodiments of the present disclosure is implemented.
- a computer program product for storing computer-readable instructions which, when executed, cause a computer to perform the lane line-based intelligent driving control method described in any one of the foregoing possible implementations.
- the computer program product is a computer storage medium.
- the computer program product is a software product, such as a Software Development Kit (SDK), etc.
- a lane line detection result of a vehicle driving environment is obtained, and an estimated distance and / or estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result
- the embodiments of the present disclosure implement intelligent control of the driving state of the vehicle based on lane lines, which helps to improve driving safety.
- FIG. 1 is a flowchart of an embodiment of a lane line-based intelligent driving control method according to the present disclosure.
- FIG. 2 is a flowchart of another embodiment of a lane line-based intelligent driving control method according to the present disclosure.
- FIG. 3 is a flowchart of still another embodiment of a lane line-based intelligent driving control method according to the present disclosure.
- FIG. 4 is an example of two lane lines in the embodiment of the present disclosure.
- FIG. 5 is a schematic structural diagram of an embodiment of an intelligent driving control device based on lane lines of the present disclosure.
- FIG. 6 is a schematic structural diagram of another embodiment of a lane line-based intelligent driving control device according to the present disclosure.
- FIG. 7 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
- “a plurality” may refer to two or more, and “at least one” may refer to one, two, or more.
- Embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with many other general-purpose or special-purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments, and / or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of these systems, and more.
- Electronic devices such as a terminal device, a computer system, and a server can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
- program modules can include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
- the computer system / server can be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on a local or remote computing system storage medium including a storage device.
- FIG. 1 is a flowchart of an embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 1, the lane line-based intelligent driving control method of this embodiment includes:
- the lane line detection result in the vehicle driving environment may be obtained, for example, by detecting lane lines in the vehicle driving environment based on a neural network, for example, by using a neural network to perform lane line detection on an image of the vehicle driving environment to obtain the lane line detection result; or, the lane line detection result in the driving environment of the vehicle may be obtained directly from an Advanced Driving Assistance System (ADAS), that is, the lane line detection result in the ADAS is used directly;
- the operation 102 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by an acquisition module executed by the processor.
- the operation 104 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a determining module executed by the processor.
- the intelligent driving control performed on the vehicle may include, but is not limited to, at least one of the following: automatic driving control, assisted driving control, and the like.
- the automatic driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: braking, deceleration, changing the driving direction, lane keeping, driving mode switching control (for example, switching from an automatic driving mode to a non-automatic driving mode, or from a non-automatic driving mode to an automatic driving mode), and so on.
- the driving mode switching control may control the vehicle to switch from an automatic driving mode to a non-automatic driving mode (such as a manual driving mode), or to switch from a non-automatic driving mode to an automatic driving mode.
- the assisted driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: a lane line deviation warning, a lane line keeping prompt, and other operations that help prompt the driver to control the driving state of the vehicle.
- the operation 106 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control module executed by the processor.
- a lane line detection result of the vehicle driving environment is obtained, an estimated distance and / or estimated time for the vehicle to drive out of the lane line is determined according to the driving state of the vehicle and the lane line detection result, and intelligent driving control such as automatic driving or assisted driving is performed on the vehicle according to the estimated distance and / or estimated time.
- the embodiments of the present disclosure thus implement intelligent control of the driving state of the vehicle based on lane lines, which helps reduce or avoid traffic accidents caused by vehicles driving out of the lane line and improves driving safety.
- FIG. 2 is a flowchart of another embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 2, the lane line-based intelligent driving control method of this embodiment includes:
- the lane line probability map is used to indicate a probability value that at least one pixel point in the image belongs to the lane line.
- the neural network in the embodiment of the present disclosure may be a deep neural network, such as a convolutional neural network, which may be obtained by training the neural network in advance by using a sample image and a pre-labeled and accurate lane line probability map.
- training a neural network by using a sample image and an accurate lane line probability map can be achieved, for example, as follows: semantic segmentation is performed on the sample image through the neural network, and a predicted lane line probability map is output; the loss function value of the neural network is obtained according to the difference between the predicted lane line probability map and the accurate lane line probability map at at least one pixel point, and the neural network is trained based on the loss function value, for example, by a gradient-based training method in which the gradient is back-propagated through the chain rule and the parameter values of each network layer in the neural network are adjusted until a preset condition is satisfied, for example, until the difference between the predicted lane line probability map and the accurate lane line probability map at at least one pixel point is smaller than a preset threshold.
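- The per-pixel difference driving the training described above is commonly measured with a pixel-wise cross-entropy loss between the predicted probability maps and the ground-truth labels. The NumPy helper below is an illustrative sketch of that objective, not the patent's implementation:

```python
import numpy as np

def softmax(logits, axis=0):
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segmentation_loss(logits, labels):
    """Pixel-wise cross-entropy between predicted probability maps and labels.

    logits: (C, H, W) raw network outputs, one channel per class.
    labels: (H, W) integer ground-truth class per pixel.
    """
    probs = softmax(logits, axis=0)
    h, w = labels.shape
    # probability assigned to the correct class at each pixel
    p_true = probs[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -np.log(p_true + 1e-12).mean()
```

In gradient-based training, this scalar loss is back-propagated through the chain rule to adjust the parameters of each network layer.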
- the operation 202 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a detection unit run by a processor or a neural network in the detection unit.
- the operation 204 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a detection unit or a determination subunit in the detection unit that is run by the processor.
- the operation 206 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determining module executed by the processor.
- the operation 208 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control module executed by the processor.
- the image is semantically segmented by a neural network, a lane line probability map is output, and an area where the lane line is located is determined according to the lane line probability map.
- Because the neural network is based on deep learning and is trained on a large number of labeled lane line images covering scenes such as curves, missing lane lines, road edges, dim light, and backlighting, lane lines can be effectively identified in these various complex driving scenarios. This improves the accuracy of lane line detection, so that an accurate estimated distance and / or estimated time can be obtained, thereby helping to improve the accuracy of intelligent driving control and driving safety.
- the method may further include: preprocessing an original image including the driving environment of the vehicle to obtain the foregoing image including the driving environment of the vehicle.
- the neural network is used to perform semantic segmentation on the above-mentioned image obtained by preprocessing.
- the original image is preprocessed before being input to the neural network.
- the original image collected by the camera can be scaled and cropped.
- the original image is scaled and cropped to an image of a preset size.
- preprocessing reduces the complexity of the semantic segmentation performed by the neural network, reduces time consumption, and improves processing efficiency.
- the preprocessing of the original image may also include selecting, from the original images collected by the camera, images that meet preset image quality criteria (such as image sharpness and exposure) and inputting them to the neural network for processing, so as to improve the accuracy of semantic segmentation and thus the accuracy of lane line detection.
- image quality such as image sharpness, exposure, etc.
- a neural network is used to perform semantic segmentation on an image including the driving environment of the vehicle, and outputting a lane line probability map may include:
- the feature map is semantically segmented by a neural network to obtain a lane line probability map of N lane lines.
- the pixel value of each pixel in the lane line probability map of each lane line is used to indicate the probability value that the corresponding pixel point in the image belongs to that lane line; the value of N is an integer greater than 0. For example, in some optional examples, N has a value of 4.
- the neural network in the embodiments of the present disclosure may include a network layer for feature extraction and a network layer for classification.
- the network layer used for feature extraction may include, for example, a convolution layer, a batch normalization (BN) layer, and a non-linear layer.
- Feature extraction is performed on the image through the convolutional layer, the BN layer, and the non-linear layer in turn, and a feature map is generated; the feature map is semantically segmented through the network layer for classification, and the lane line probability map of multiple lane lines is obtained.
- the lane line probability map of the N lane lines may be an N-channel probability map, where the pixel values of each pixel in each channel's probability map represent the probability values of the corresponding pixel points in the image belonging to the corresponding lane line.
- the lane line probability map of the above N lane lines may also be a probability map of N + 1 channels, where the N + 1 channels respectively correspond to the N lane lines and the background; that is, the probability map of each channel in the N + 1-channel probability map represents the probability that at least one pixel point in the above image belongs to the lane line or background corresponding to that channel.
- the feature map is semantically segmented through a neural network to obtain a lane line probability map of N lane lines, which may include:
- the feature map is semantically segmented by a neural network to obtain a probability map of N + 1 channels.
- the N + 1 channels respectively correspond to the N lane lines and the background, that is, the probability map of each channel in the probability map of N + 1 channels indicates the probability that at least one pixel point in the above image belongs to the lane line or background corresponding to that channel.
- a lane line probability map of N lane lines is obtained from a probability map of N + 1 channels.
- the neural network in the embodiment of the present disclosure may include a network layer for feature extraction, a network layer for classification, and a normalization (Softmax) layer.
- Feature extraction is performed on the image through each network layer used for feature extraction to generate a series of feature maps; the finally output feature map is semantically segmented through the network layer used for classification to obtain the lane line probability map of N + 1 channels; the Softmax layer is then used to normalize the lane line probability map of the N + 1 channels, converting the probability value of each pixel in the lane line probability map to a value in the range of 0 to 1.
- the network layer used for classification may multi-classify each pixel in the feature map.
- each pixel in the feature map is classified into one of five categories (background, left-left lane line, left lane line, right lane line, and right-right lane line), and the probability that each pixel in the feature map belongs to each of these categories is output, to obtain the probability map of the above N + 1 channels.
- the probability value of each pixel in each probability map represents the probability that the pixel in the image corresponding to this pixel belongs to a certain category.
- N is the number of lane lines in the driving environment of the vehicle, and may be any integer value greater than 0.
- when the value of N is 2, the N + 1 channels correspond to the background, left lane line, and right lane line in the driving environment of the vehicle; or, when the value of N is 3, the N + 1 channels correspond to the background, left lane line, middle lane line, and right lane line in the vehicle driving environment; or, when N is 4, the N + 1 channels correspond to the background, left-left lane line, left lane line, right lane line, and right-right lane line.
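- As a minimal illustration of the N + 1-channel normalization described above (not the patent's implementation), the sketch below applies Softmax over the channel axis and returns the N lane line maps, assuming channel 0 is the background:

```python
import numpy as np

# N = 4 lane lines, so N + 1 = 5 channels; channel 0 is assumed to be background
CLASSES = ["background", "left-left", "left", "right", "right-right"]

def lane_probability_maps(logits):
    """Normalize (N+1, H, W) logits with Softmax and return the N lane maps."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)  # every value now lies in [0, 1]
    return probs[1:]                          # drop the background channel
```

Each returned map can then be thresholded and searched for connected regions, as described in the following operations.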
- determining the area where the lane line is located according to the lane line probability map of one lane line in operation 204 may include:
- a maximum connected domain search is performed in the lane line probability map to find the pixel point set belonging to the lane line;
- the area where the lane line is located is determined based on the pixel point set belonging to the lane line.
- a breadth-first search algorithm may be used to find the maximum connected area: all connected areas whose probability values are greater than a first preset threshold are found, and the largest of all connected areas is taken as the area where the detected lane line is located.
- the output of the neural network is a lane line probability map of multiple lane lines.
- the pixel value of each pixel in the lane line probability map represents the probability value of the pixel in the corresponding image belonging to a lane line.
- after normalization, the value lies in the range of 0 to 1.
- pixel points in the lane line probability map that have a high probability of belonging to the lane line are selected through the first preset threshold, and then the maximum connected domain search is performed to find the set of pixels belonging to the lane line, which is taken as the area where the lane line is located. The above operations are performed for each lane line separately to determine the area where each lane line is located.
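- The thresholded breadth-first search described above can be sketched as follows; this is an illustrative 4-connected implementation, and the threshold value is a placeholder, not the patent's first preset threshold:

```python
from collections import deque
import numpy as np

def largest_lane_region(prob_map, threshold=0.5):
    """Breadth-first search for the largest 4-connected region of pixels
    whose lane line probability exceeds `threshold`."""
    h, w = prob_map.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy, sx] or prob_map[sy, sx] <= threshold:
                continue
            # flood-fill one connected region starting from (sy, sx)
            region, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and prob_map[ny, nx] > threshold:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(region) > len(best):
                best = region
    return best
```

Running this once per lane line probability map yields the candidate area for each lane line.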
- the above determining the area where the lane line is located based on a set of pixels belonging to the lane line may include:
- if the confidence level is greater than a second preset threshold, the area formed by the pixel set is used as the area where the lane line is located.
- the confidence level is a probability value that an area formed by a set of pixel points is a real lane line.
- the second preset threshold is an empirical value set according to actual needs, and can be adjusted according to actual scenarios.
- if the confidence level is too small, that is, not greater than the second preset threshold, it indicates that the lane line does not exist, and the determined lane line is discarded; if the confidence level is large, that is, greater than the second preset threshold, the determined lane line is highly likely to be a real lane line, and its area is determined as the area where the lane line is located.
- FIG. 3 is a flowchart of still another embodiment of a lane line-based intelligent driving control method according to the present disclosure. As shown in FIG. 3, the lane line-based intelligent driving control method of this embodiment includes:
- the lane line probability map is used to indicate a probability value that at least one pixel point in the image belongs to the lane line.
- the operation 302 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a detection unit run by a processor or a neural network in the detection unit.
- the operation 304 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a detection unit or a determination subunit in the detection unit that is run by the processor.
- the lane line information includes a distance from at least one point on the lane line (for example, points on the lane line) to the vehicle.
- lane line information can be expressed as a curve, a straight line, a discrete map including at least one point on the lane line and its distance to the vehicle, a data table, an equation, etc.; the embodiments of the present disclosure do not limit the expression form of the lane line information.
- the lane line equation has three parameters (a, b, c).
- the two curves are two lane lines corresponding to the two lane line equations.
- Y_max represents the maximum distance from a point on the lane line to the vehicle in the vertical direction directly in front of the vehicle
- Y_min represents the minimum distance from a point on the lane line to the vehicle in the vertical direction directly in front of the vehicle.
- the operation 306 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determining module run by the processor or a fitting processing unit in the determining module.
- the operation 308 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a determination module or a determination unit in the determination module that is executed by the processor.
- the operation 310 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a control module executed by the processor.
- the lane line information of each lane line is obtained by performing curve fitting on the pixels in the area where each lane line is located, and the estimated distance until the vehicle drives out of the corresponding lane line and / or the estimated time until the vehicle drives out of the lane line is determined based on the driving state of the vehicle and the lane line information. Since the lane line information obtained by curve fitting can be expressed as a quadratic curve or a similar representation, it can better fit curved lane lines, and therefore remains well applicable to curves and can be applied to early warning under various road conditions.
- curve fitting is performed on pixels in an area where a lane line is located to obtain lane line information of the lane line, which may include:
- the selected multiple pixels are converted from the camera coordinate system in which the camera is located into the world coordinate system, and the coordinates of the multiple pixels in the world coordinate system are obtained.
- the origin of the world coordinate system can be set according to requirements; for example, the origin can be set as the point where the left front wheel of the vehicle touches the ground, and the y-axis direction in the world coordinate system is the direction directly in front of the vehicle;
- curve fitting is performed on the plurality of pixel points in the world coordinate system to obtain lane line information of the one lane line.
- some pixels can be randomly selected in the area where a lane line is located.
- based on the camera parameters (also called camera calibration parameters), these pixels are converted into the world coordinate system, and then a curve is fitted to these pixel points in the world coordinate system to obtain the fitted curve.
- the camera calibration parameters can include internal and external parameters.
- the position and orientation of the camera or camera in the world coordinate system can be determined based on the external parameters.
- the external parameters can include a rotation matrix and a translation matrix.
- the rotation matrix and the translation matrix together describe how to convert points from the world coordinate system to the camera coordinate system, or vice versa; the internal parameters are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera.
- curve fitting refers to computing, from a number of discrete points, the curve that these points form.
- a least square method may be used to perform curve fitting based on the multiple pixel points.
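- The least-squares fit of the three-parameter lane line equation can be sketched as below. The quadratic form x = a·y² + b·y + c in the forward distance y is an assumption (one common choice consistent with the three parameters (a, b, c) and the quadratic-curve representation mentioned above), not a form the patent fixes:

```python
import numpy as np

def fit_lane_line(points):
    """Least-squares fit of x = a*y**2 + b*y + c to world-coordinate points.

    points: iterable of (x, y) world coordinates, y being the forward distance.
    Returns the fitted parameters (a, b, c).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # design matrix with columns [y^2, y, 1]
    A = np.stack([y**2, y, np.ones_like(y)], axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, x, rcond=None)
    return a, b, c
```

The pixels selected from the lane line area, once converted to the world coordinate system, would be passed to this function to obtain the lane line equation.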
- the method may further include: filtering parameters in the lane line information of the lane line to filter out jitter and some abnormal situations, and ensure the stability of the lane line information.
- filtering parameters in the lane line information of a lane line may include:
- the previous frame image is a frame whose detection timing is before the image in the video where the image is located; for example, it may be the frame immediately preceding and adjacent to the image, or a frame preceding the image by one or more frames.
- Kalman filtering is an estimation method based on the statistical characteristics of a time-varying random signal to make the future value of the signal as close to the true value as possible.
- Kalman filtering the parameter values in the lane line information can improve the accuracy of the lane line information and helps to accurately determine the distance between the vehicle and the lane line subsequently, so as to accurately warn of the vehicle deviating from the lane line.
- the method may further include: smoothing the lane line information of the same lane line across frames.
- the lane line information fitted from each frame of the video will change, but adjacent frames do not change much, so the lane line information of the current frame image can be smoothed to filter out jitter and some abnormal conditions, ensuring the stability of the lane line information.
- a lane line can be determined for the first frame image participating in lane line detection in the video, and a tracker is established for each lane line to track it. If the same lane line is detected in the current frame image, and the difference between its parameter values and the parameter values in the lane line information of the same lane line determined from the previous frame image is smaller than a third preset threshold, then the parameter values in the lane line information of the current frame image are used to update the tracker of the same lane line determined in the previous frame image, so as to perform Kalman filtering on the lane line information of the same lane line in the current frame image. If the tracker of the same lane line is updated in two consecutive frames, it indicates that the lane line determination result is relatively accurate; the tracker of the lane line can be confirmed, and the lane line tracked by the tracker is taken as the final lane line result.
- if the tracker is not updated for several consecutive frames, the corresponding lane line is considered to have disappeared and the tracker is deleted.
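- The per-parameter Kalman filtering in such a tracker can be sketched as below. This is an illustrative scalar filter with an assumed random-walk state model and placeholder noise values (q, r); the patent does not specify the filter's model or tuning:

```python
class ParamKalman:
    """Scalar Kalman filter applied independently to each lane line
    parameter (a, b, c), assuming a random-walk state model."""

    def __init__(self, init, q=1e-4, r=1e-2):
        # state estimate, estimate variance, process noise, measurement noise
        self.x, self.p, self.q, self.r = init, 1.0, q, r

    def update(self, z):
        self.p += self.q                  # predict: random-walk variance growth
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with measurement z
        self.p *= (1.0 - k)
        return self.x
```

Each new frame's fitted parameter value is fed in as the measurement z, yielding a smoothed parameter that filters out jitter between frames.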
- determining the estimated distance of the vehicle from the lane line according to the driving state of the vehicle and the lane line detection result may include:
- the estimated distance between the vehicle and the corresponding lane line is determined according to the vehicle's position in the world coordinate system and the lane line information of the lane line; in this embodiment, the driving state of the vehicle includes the vehicle's position in the world coordinate system.
- determining the estimated time for the vehicle to exit the lane line according to the driving state of the vehicle and the lane line detection result may include:
- the driving state of the vehicle includes the speed of the vehicle and the position of the vehicle in the world coordinate system.
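- A minimal sketch of these two estimates follows. It assumes the quadratic lane line form x = a·y² + b·y + c, measures the lateral distance at an assumed forward distance y, and divides by the vehicle's lateral speed; the exact geometry the patent uses is not specified here:

```python
def lane_x(a, b, c, y):
    """Lateral position of the lane line at forward distance y (assumed quadratic)."""
    return a * y * y + b * y + c

def estimated_distance(vehicle_x, a, b, c, y=0.0):
    """Lateral distance from the vehicle to the lane line at forward distance y."""
    return abs(lane_x(a, b, c, y) - vehicle_x)

def estimated_time(distance, lateral_speed):
    """Time until the vehicle reaches the lane line at its current lateral speed."""
    if lateral_speed <= 0:
        return float("inf")  # not approaching the lane line
    return distance / lateral_speed
```

For example, a vehicle 1.8 m from the lane line drifting toward it at 0.6 m/s would have an estimated time of about 3 seconds.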
- performing intelligent driving control on the vehicle according to the above estimated distance and / or estimated time may include:
- intelligent driving control corresponding to the satisfied preset conditions is performed, for example, automatic driving control and / or assisted driving control corresponding to the satisfied preset conditions is performed.
- the degree of intelligent driving control corresponding to each of the multiple preset conditions may be gradually increased.
- corresponding intelligent driving control measures may be adopted to control the vehicle. Performing the corresponding automatic driving control and / or assisted driving control can effectively prevent vehicles from driving out of the lane lines and avoid traffic accidents without interfering with normal driving, improving driving safety.
- when the comparison result satisfies one or more preset conditions, performing the intelligent driving control corresponding to the satisfied preset conditions may, in some optional examples, include:
- if the estimated distance is less than or equal to a fourth preset threshold and greater than a fifth preset threshold, providing a lane line departure prompt to the vehicle, for example, reminding the driver that the vehicle has deviated from the current lane or is about to drive out of the current lane line; or,
- if the estimated time is less than or equal to a sixth preset threshold and greater than a seventh preset threshold, providing a lane line departure prompt to the vehicle; or,
- if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, and the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, providing a lane line departure prompt to the vehicle;
- wherein the lane line departure warning includes the lane line departure prompt.
- the values of the fourth preset threshold and the fifth preset threshold are greater than 0, and the fifth preset threshold is less than the fourth preset threshold.
- for example, the values of the fourth preset threshold and the fifth preset threshold are 5 meters and 3 meters, respectively.
- the values of the sixth preset threshold and the seventh preset threshold are greater than 0, and the seventh preset threshold is less than the sixth preset threshold.
- for example, the values of the sixth preset threshold and the seventh preset threshold are 5 seconds and 3 seconds, respectively.
- if the estimated distance between the vehicle and the lane line is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, or the estimated time for the vehicle to drive out of the lane line is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, a lane line departure prompt is given to the vehicle, which can remind the driver that the vehicle is deviating towards the lane line so that corresponding driving measures can be taken in time, preventing the vehicle from driving out of the lane line and improving driving safety.
- in some optional examples, the lane line departure prompt is given by combining the estimated distance between the vehicle and the lane line with the estimated time for driving out of the lane line, so as to improve the accuracy of the lane line departure warning. A further optional example may also include:
- if the estimated distance is less than or equal to the fifth preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle; or,
- if the estimated time is less than or equal to the seventh preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle; or,
- if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle;
- the lane line departure warning includes the lane line departure alarm; the lane line departure alarm may be issued by, for example, sound, light, electricity, or the like.
- the degrees of intelligent driving control corresponding to the preset conditions increase step by step, ranging from providing a lane line departure prompt to the vehicle to performing automatic driving control and/or a lane line departure alarm on the vehicle, so as to prevent the vehicle from driving out of the lane line and improve driving safety.
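The graded control described above can be sketched as a simple decision function. This is a hypothetical Python sketch: the function name, the string return values, and the default threshold values (the 5/3 example figures from the text, with the fourth/fifth thresholds on distance and the sixth/seventh on time, as in the conditions above) are assumptions of the example.

```python
def control_level(distance_m, time_s, d4=5.0, d5=3.0, t6=5.0, t7=3.0):
    """Map the estimated distance/time onto the graded preset conditions:
    a lower-level departure prompt first, then automatic driving control
    and/or a departure alarm as the estimates shrink further."""
    if distance_m <= d5 or time_s <= t7:
        return "automatic_control_and_alarm"   # higher-level intervention
    if d5 < distance_m <= d4 or t7 < time_s <= t6:
        return "departure_prompt"              # lower-level reminder
    return "no_action"
```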
- in some optional examples, when the estimated distances determined based on the image and historical frame images are all less than or equal to the fifth preset threshold, automatic driving control and/or a lane line departure alarm may be performed on the vehicle, where the historical frame images include at least one frame image whose detection timing is before the image in the video where the image is located; or, when the estimated times determined based on the image and the historical frame images are all less than or equal to the seventh preset threshold, automatic driving control and/or a lane line departure alarm is performed on the vehicle; or, when the estimated distances determined based on the image and the historical frame images are all less than or equal to the fifth preset threshold, and the estimated times determined based on the image and the historical frame images are all less than or equal to the seventh preset threshold, automatic driving control and/or a lane line departure alarm is performed on the vehicle.
- the estimated distance and/or estimated time of historical frame images are also taken into account as the basis for performing automatic driving control and/or the lane line departure alarm on the vehicle, which can improve the accuracy of the automatic driving control and/or the lane line departure alarm.
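A minimal sketch of this historical-frame check, assuming the per-frame estimates are already available as numbers; the function name and the list-based history are illustrative only.

```python
def should_alarm(current_estimate, history, threshold):
    """Trigger the higher-level alarm only when the estimate from the
    current frame and the estimates from all historical frames are at or
    below the threshold, which suppresses single-frame detection noise."""
    return all(e <= threshold for e in [current_estimate, *history])
```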
- the line segment AB represents the trajectory the vehicle will travel in its current state.
- the absolute position A' of the vehicle in the world coordinate system can be obtained.
- the intersection position B of the straight line along the vehicle's driving direction from A' with the target lane line can be calculated, which gives the length of the line segment A'B.
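One hedged way to realise the A'/B construction numerically: represent the fitted target lane line as a function x = lane_poly(y) in world coordinates and march along the driving direction from A' until the sign of the lateral offset flips. The marching scheme, step size, and curve parameterisation are assumptions of this sketch, not the patent's method.

```python
import math

def travel_line_lane_intersection(a_prime, heading, lane_poly,
                                  max_range=200.0, step=0.1):
    """March along the vehicle's driving direction from A' until the ray
    crosses the fitted lane line x = lane_poly(y); return the length
    |A'B| to the crossing point B, or None if no crossing in range."""
    x0, y0 = a_prime
    dx, dy = math.cos(heading), math.sin(heading)
    prev_side = None
    steps = int(max_range / step)
    for k in range(steps + 1):
        t = k * step
        x, y = x0 + t * dx, y0 + t * dy
        side = x - lane_poly(y)  # signed lateral offset from the lane line
        if prev_side is not None and prev_side * side <= 0:
            return t             # sign change: the ray crossed the line
        prev_side = side
    return None
```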
- historical frame image information is collected. If the vehicle is about to drive out of the target lane line within several frames, the time is too short (less than the seventh preset threshold), and the distance A'B between the vehicle and the target lane line is too short (less than the fifth preset threshold), then automatic driving control and/or a lane line departure alarm is performed, for example, decelerating the vehicle and sounding the alarm at the same time.
- the historical frame image information can be used to calculate the vehicle's lateral speed at the current moment. Based on the current distance of the vehicle from the target lane line, the time at which the vehicle will press the target lane line (that is, the time of arrival at the target lane line) can be calculated and used as a basis for deciding whether to perform automatic driving control and/or a lane line departure alarm on the vehicle.
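A sketch of the lateral-speed estimate from historical frames, assuming per-frame distances to the target lane line sampled at a fixed frame interval; the finite-difference formula and the function names are illustrative assumptions.

```python
def lateral_speed(distances, dt):
    """Estimate the vehicle's lateral speed towards the lane line (m/s)
    from distances measured in consecutive frames, oldest first, spaced
    dt seconds apart."""
    if len(distances) < 2:
        return 0.0
    return (distances[0] - distances[-1]) / (dt * (len(distances) - 1))

def crossing_time(current_distance, speed):
    """Predicted time (s) until the vehicle presses the target lane line."""
    return float("inf") if speed <= 0 else current_distance / speed
```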
- the estimated distance between the vehicle and the target lane line can be obtained according to the setting of the coordinate origin of the lane line equation of the target lane line, the driving direction of the vehicle, and the width of the vehicle. For example, if the coordinate origin of the lane line equation is set at the left wheel of the vehicle and the target lane line is on the left side of the vehicle, then the distance between the vehicle and the intersection of its driving direction with the target lane line may be used directly. If the coordinate origin of the lane line equation is set at the right wheel of the vehicle and the target lane line is on the left side of the vehicle, then the distance between the vehicle and the intersection of its driving direction with the target lane line, minus the effective width of the vehicle width projected onto the driving direction, is the estimated distance between the vehicle and the target lane line. If the coordinate origin of the lane line equation is set at the center of the vehicle and the target lane line is on the left side of the vehicle, then the distance between the vehicle and the intersection of its driving direction with the target lane line, minus the effective width of half the vehicle width projected onto the driving direction, is the estimated distance between the vehicle and the target lane line.
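The origin-dependent adjustment can be sketched as follows. Note the sign convention is an assumption of this example: the projected effective width (or half of it) is subtracted so that the distance is measured to the vehicle edge nearest the target lane line, and the names are hypothetical.

```python
def distance_to_target_line(raw_intersection_dist, origin, effective_width):
    """Adjust the raw distance (coordinate origin -> intersection with the
    target lane line) for where the lane line equation's origin sits on
    the vehicle body.

    origin: 'near_wheel' - origin on the wheel nearest the target line
            'far_wheel'  - origin on the wheel farthest from it
            'center'     - origin at the vehicle centre
    effective_width: the vehicle width projected onto the driving
    direction, as described in the text."""
    if origin == "near_wheel":
        return raw_intersection_dist
    if origin == "far_wheel":
        return raw_intersection_dist - effective_width
    if origin == "center":
        return raw_intersection_dist - effective_width / 2.0
    raise ValueError(f"unknown origin: {origin!r}")
```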
- the embodiments of the present disclosure can be applied to the scenarios of automatic driving and assisted driving to achieve accurate lane line detection, automatic driving control, and early warning of vehicle departure from lane lines.
- any of the lane line-based intelligent driving control methods provided by the embodiments of the present disclosure may be executed by any appropriate device having a data processing capability, including but not limited to a terminal device and a server.
- any of the lane line-based intelligent driving control methods provided in the embodiments of the present disclosure may be executed by a processor.
- the processor executes any of the lane line-based intelligent driving control methods mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. Details are not described again below.
- the foregoing program may be stored in a computer-readable storage medium.
- when the program is executed, the steps including those of the foregoing method embodiments are performed.
- the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
- FIG. 5 is a schematic structural diagram of an embodiment of an intelligent driving control device based on lane lines of the present disclosure.
- the lane line-based intelligent driving control device in this embodiment may be used to implement any one of the foregoing lane line-based intelligent driving control method embodiments of the present disclosure.
- the lane line-based intelligent driving control device in this embodiment includes an acquisition module, a determination module, and a control module. Specifically:
- the acquisition module is configured to acquire a lane line detection result of a vehicle running environment.
- the determining module is configured to determine an estimated distance of the vehicle from the lane line and / or an estimated time of the vehicle from the lane line according to a driving state of the vehicle and a detection result of the lane line.
- a control module is configured to perform intelligent driving control on the vehicle according to the estimated distance and / or the estimated time.
- based on the lane line-based intelligent driving control device provided by the foregoing embodiment of the present disclosure, a lane line detection result of a vehicle driving environment is obtained; an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line is determined based on the driving state of the vehicle and the lane line detection result; and intelligent driving control is performed on the vehicle according to the estimated distance and/or the estimated time.
- the embodiment of the present disclosure implements intelligent control of the driving state of the vehicle based on the lane line, so as to keep the vehicle driving within the lane lines, reduce or avoid traffic accidents caused by the vehicle driving out of the lane line, and improve driving safety.
- the acquisition module may include a detection unit for detecting a lane line of the vehicle driving environment based on a neural network to obtain the lane line detection result; or an acquisition unit for acquiring the lane line detection result of the vehicle driving environment from an advanced driving assistance system.
- the detection unit may include: a neural network for performing semantic segmentation on an image including the driving environment of the vehicle and outputting a lane line probability map, the lane line probability map being used to indicate the probability value that at least one pixel point in the image belongs to a lane line; and a determination subunit for determining the region where the lane line is located according to the lane line probability map; the lane line detection result includes the region where the lane line is located.
- the neural network is used to: perform feature extraction on the image to obtain a feature map; and perform semantic segmentation on the feature map to obtain lane line probability maps of N lane lines; the pixel value of each pixel point in the lane line probability map of each lane line represents the probability value that the corresponding pixel point in the image belongs to that lane line, and N is an integer greater than 0.
- when performing semantic segmentation on the feature map to obtain the lane line probability maps of the N lane lines, the neural network is used to: perform semantic segmentation on the feature map to obtain probability maps of N+1 channels, the N+1 channels corresponding to the N lane lines and the background respectively; and obtain the lane line probability maps of the N lane lines from the probability maps of the N+1 channels.
- the value of N is 2, and the N+1 channels correspond to the background, the left lane line, and the right lane line respectively; or, the value of N is 3, and the N+1 channels correspond to the background, the left lane line, the middle lane line, and the right lane line respectively; or, the value of N is 4, and the N+1 channels correspond to the background, the left-left lane line, the left lane line, the right lane line, and the right-right lane line respectively.
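For illustration, the N+1-channel output can be turned into per-lane probability maps with a per-pixel softmax over the channels. This pure-Python sketch (nested lists instead of a deep-learning framework's tensors) only mirrors the channel layout described above; it is not the patent's network.

```python
import math

def lane_probability_maps(logits):
    """Turn per-pixel logits of shape [N+1][H][W] (channel 0 = background,
    channels 1..N = lane lines) into N per-lane probability maps using a
    per-pixel softmax over the N+1 channels."""
    channels = len(logits)
    h, w = len(logits[0]), len(logits[0][0])
    probs = [[[0.0] * w for _ in range(h)] for _ in range(channels)]
    for i in range(h):
        for j in range(w):
            exps = [math.exp(logits[c][i][j]) for c in range(channels)]
            s = sum(exps)
            for c in range(channels):
                probs[c][i][j] = exps[c] / s
    return probs[1:]  # keep only the N lane line maps, drop the background
```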
- the determination subunit is used to: select, from the lane line probability map of a lane line, pixel points with probability values greater than a first preset threshold; perform a maximum connected component search in the lane line probability map based on the selected pixel points to find the set of pixel points belonging to the lane line; and determine the region where the lane line is located based on the set of pixel points belonging to the lane line.
- when determining the region where the lane line is located based on the set of pixel points belonging to the lane line, the determination subunit is used to: sum the probability values of all pixel points in the set of pixel points belonging to the lane line to obtain the confidence of the lane line; and if the confidence is greater than a second preset threshold, take the region formed by the set of pixel points as the region where the lane line is located.
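A compact sketch of the selection, maximum-connected-component search, and confidence check described above; 4-connectivity, the set-based traversal, and the function name are assumptions of the example.

```python
def lane_region(prob_map, p_thresh, conf_thresh):
    """Select pixels above the first preset threshold, find the largest
    4-connected component among them, and accept it as the lane line
    region only if the sum of its probabilities (the confidence) exceeds
    the second preset threshold; otherwise return None."""
    h, w = len(prob_map), len(prob_map[0])
    seen, best = set(), []
    for si in range(h):
        for sj in range(w):
            if (si, sj) in seen or prob_map[si][sj] <= p_thresh:
                continue
            # flood-fill one connected component of above-threshold pixels
            stack, comp = [(si, sj)], []
            seen.add((si, sj))
            while stack:
                i, j = stack.pop()
                comp.append((i, j))
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < h and 0 <= nj < w and (ni, nj) not in seen \
                            and prob_map[ni][nj] > p_thresh:
                        seen.add((ni, nj))
                        stack.append((ni, nj))
            if len(comp) > len(best):
                best = comp
    confidence = sum(prob_map[i][j] for i, j in best)
    return best if confidence > conf_thresh else None
```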
- FIG. 6 is a schematic structural diagram of another embodiment of an intelligent driving control device based on lane lines of the present disclosure.
- the lane line-based intelligent driving control device in this embodiment further includes a preprocessing module for preprocessing the original image including the driving environment of the vehicle.
- in this case, when performing semantic segmentation on the image including the driving environment of the vehicle, the neural network is used to perform semantic segmentation on the preprocessed image.
- the determination module may include: a fitting processing unit configured to perform curve fitting on the pixel points in the region where each lane line is located to obtain lane line information of each lane line, the lane line information including the distance from at least one point on the lane line to the vehicle; and a determining unit configured to determine, according to the driving state of the vehicle and the lane line information of the lane line, the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line.
- the fitting processing unit is used to: select a plurality of pixel points from the region where one lane line is located; convert the plurality of pixel points from the camera coordinate system of the camera into the world coordinate system to obtain the coordinates of the plurality of pixel points in the world coordinate system; and perform curve fitting on the plurality of pixel points in the world coordinate system according to these coordinates to obtain the lane line information of the lane line.
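A sketch of the two steps above, assuming a calibrated ground-plane homography is available for the camera-to-world conversion and using a straight-line least-squares fit x = a*y + b for brevity (a real system would typically fit a higher-order curve); the names and the 3x3-homography representation are assumptions of this example.

```python
def camera_to_world(points, homography):
    """Map pixel coordinates (u, v) to world ground-plane coordinates
    (x, y) with a 3x3 homography (nested lists, row-major)."""
    out = []
    for u, v in points:
        h = homography
        w = h[2][0] * u + h[2][1] * v + h[2][2]
        out.append(((h[0][0] * u + h[0][1] * v + h[0][2]) / w,
                    (h[1][0] * u + h[1][1] * v + h[1][2]) / w))
    return out

def fit_lane_line(world_points):
    """Least-squares straight-line fit x = a*y + b over world points;
    returns the lane line parameters (a, b)."""
    n = len(world_points)
    sx = sum(x for x, _ in world_points)
    sy = sum(y for _, y in world_points)
    syy = sum(y * y for _, y in world_points)
    sxy = sum(x * y for x, y in world_points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b
```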
- the determining module may further include: a filtering unit, configured to filter parameters in the lane line information of the lane line.
- the determining unit is configured to determine the estimated distance of the vehicle from the lane line and / or the estimated time of the vehicle from the lane line according to the driving state of the vehicle and the lane line information obtained by the filtering.
- the filtering unit is configured to perform Kalman filtering on the parameter values of the parameters in the lane line information according to the parameter values of the parameters in the lane line information and the parameter values of the parameters in the historical lane line information of the lane line obtained based on the previous frame image; the previous frame image is a frame image whose detection timing is before the image in the video where the image is located.
- the determining module may further include: a selecting unit configured to select, as valid lane line information for Kalman filtering, lane line information in which the parameter values of the parameters have changed relative to the parameter values of the corresponding parameters in the historical lane line information, and in which the difference between the parameter values of the parameters and the parameter values of the corresponding parameters in the historical lane line information is less than a third preset threshold.
- when determining the estimated distance for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result, the determining module is used to determine the estimated distance between the vehicle and the lane line according to the position of the vehicle in the world coordinate system and the lane line information of the lane line; the driving state of the vehicle includes the position of the vehicle in the world coordinate system.
- when determining the estimated time for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result, the determining module is used to determine the estimated time for the vehicle to drive out of the lane line according to the speed of the vehicle, the position of the vehicle in the world coordinate system, and the lane line information of the lane line; the driving state of the vehicle includes the speed of the vehicle and the position of the vehicle in the world coordinate system.
- the control module may include: a comparison unit configured to compare the estimated distance and/or the estimated time with at least one predetermined threshold; and a control unit configured to perform, when the comparison result satisfies one or more preset conditions, the intelligent driving control corresponding to the satisfied preset conditions; the intelligent driving control includes automatic driving control and/or assisted driving control.
- the intelligent driving control performed on the vehicle may include, but is not limited to, at least one of the following: automatic driving control, assisted driving control, and the like.
- the automatic driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: braking, deceleration, changing the driving direction, lane line keeping, and driving mode switching control, so as to control the driving state of the vehicle.
- the assisted driving control of the vehicle may include, but is not limited to, performing any one or more of the following controls on the vehicle: a lane line departure warning, a lane line keeping prompt, and the like, which help prompt the driver to control the driving state of the vehicle.
- the degree of intelligent driving control corresponding to each of the multiple preset conditions may be gradually increased.
- the control unit is configured to: if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, provide a lane line departure prompt to the vehicle; or, if the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, provide a lane line departure prompt to the vehicle; or, if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, and the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, provide a lane line departure prompt to the vehicle.
- the lane line departure warning includes the lane line departure prompt; the fifth preset threshold is smaller than the fourth preset threshold, and the seventh preset threshold is smaller than the sixth preset threshold.
- the control unit may be further configured to: if the estimated distance is less than or equal to the fifth preset threshold, perform automatic driving control and/or a lane line departure alarm on the vehicle; or, if the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and/or a lane line departure alarm on the vehicle; or, if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and/or a lane line departure alarm on the vehicle.
- the lane line departure warning includes the lane line departure alarm.
- when performing automatic driving control and/or the lane line departure alarm on the vehicle if the estimated distance is less than or equal to the fifth preset threshold, the control unit may be further configured to: perform automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distances determined based on the image and historical frame images are all less than or equal to the fifth preset threshold, the historical frame images including at least one frame image whose detection timing is before the image in the video where the image is located; or, when performing automatic driving control and/or the lane line departure alarm on the vehicle if the estimated time is less than or equal to the seventh preset threshold, the control unit is configured to: perform automatic driving control and/or a lane line departure alarm on the vehicle if the estimated times determined based on the image and the historical frame images are all less than or equal to the seventh preset threshold; or, when performing automatic driving control and/or the lane line departure alarm on the vehicle if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, the control unit is configured to: perform automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distances determined based on the image and the historical frame images are all less than or equal to the fifth preset threshold and the estimated times determined based on the image and the historical frame images are all less than or equal to the seventh preset threshold.
- An embodiment of the present disclosure further provides an electronic device including a lane line-based intelligent driving control device according to any one of the foregoing embodiments of the present disclosure.
- the embodiment of the present disclosure further provides another electronic device, including: a memory for storing executable instructions; and a processor for communicating with the memory to execute the executable instructions so as to complete the operations of the lane line-based intelligent driving control method according to any one of the embodiments of the present disclosure.
- FIG. 7 is a schematic structural diagram of an application embodiment of an electronic device of the present disclosure.
- the electronic device includes one or more processors, a communication unit, and the like.
- the one or more processors are, for example, one or more central processing units (CPUs) and/or one or more graphics processing units (GPUs).
- the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) or executable instructions loaded from a storage portion into a random access memory (RAM).
- the communication unit may include, but is not limited to, a network card.
- the network card may include, but is not limited to, an IB (Infiniband) network card.
- the processor may communicate with the read-only memory and/or the random access memory to execute the executable instructions, is connected to the communication unit through a bus, and communicates with other target devices via the communication unit, thereby completing operations corresponding to any of the lane line-based intelligent driving control methods provided in the embodiments of the present disclosure, for example: obtaining a lane line detection result of a vehicle driving environment; determining, according to the driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line; and performing intelligent driving control on the vehicle based on the estimated distance and/or the estimated time.
- various programs and data required for the operation of the device can be stored in the RAM.
- the CPU, ROM, and RAM are connected to each other through a bus.
- ROM is an optional module.
- executable instructions are stored in the RAM, or are written into the ROM at runtime, and the executable instructions cause the processor to perform operations corresponding to any of the lane line-based intelligent driving control methods of the present disclosure.
- An input / output (I / O) interface is also connected to the bus.
- the communication unit may be integrated, or may be set to have multiple sub-modules (for example, multiple IB network cards) respectively connected to the bus link.
- the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card or a modem.
- the communication section performs communication processing via a network such as the Internet.
- a drive is also connected to the I/O interface as needed. A removable medium, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive as needed, so that a computer program read therefrom is installed into the storage portion as needed.
- FIG. 7 is only an optional implementation manner. In practice, the number and types of the components in FIG. 7 may be selected, deleted, added, or replaced according to actual needs. Different functional components may be implemented separately or in an integrated manner; for example, the GPU and the CPU may be set separately, or the GPU may be integrated on the CPU, and the communication unit may be set separately or integrated on the CPU or the GPU. These alternative embodiments all fall within the protection scope of the present disclosure.
- an embodiment of the present disclosure also provides a computer storage medium for storing computer-readable instructions that, when executed, implement operations of the lane line-based intelligent driving control method of any of the foregoing embodiments of the present disclosure.
- an embodiment of the present disclosure also provides a computer program including computer-readable instructions. When the computer-readable instructions run in a device, a processor in the device executes the instructions to implement any of the foregoing lane line-based intelligent driving control methods of the present disclosure.
- the methods and apparatus of the present disclosure may be implemented in many ways.
- the methods and apparatuses of the present disclosure may be implemented by software, hardware, firmware or any combination of software, hardware, firmware.
- the above order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order described above unless specifically stated otherwise.
- the present disclosure may also be implemented as programs recorded in a recording medium, which programs include machine-readable instructions for implementing a method according to the present disclosure.
- the present disclosure also covers a recording medium storing a program for executing a method according to the present disclosure.
Abstract
Description
Claims (49)
- A lane line-based intelligent driving control method, comprising: acquiring a lane line detection result of a vehicle driving environment; determining, according to a driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line; and performing intelligent driving control on the vehicle according to the estimated distance and/or the estimated time.
- The method according to claim 1, wherein acquiring the lane line detection result of the vehicle driving environment comprises: detecting a lane line of the vehicle driving environment based on a neural network to obtain the lane line detection result; or acquiring the lane line detection result of the vehicle driving environment from an advanced driving assistance system.
- The method according to claim 2, wherein detecting the lane line of the vehicle driving environment based on the neural network to obtain the lane line detection result comprises: performing semantic segmentation on an image including the vehicle driving environment through a neural network and outputting a lane line probability map, the lane line probability map being used to indicate probability values that at least one pixel point in the image respectively belongs to a lane line; and determining a region where the lane line is located according to the lane line probability map, the lane line detection result including the region where the lane line is located.
- The method according to claim 3, wherein performing semantic segmentation on the image including the vehicle driving environment through the neural network and outputting the lane line probability map comprises: performing feature extraction on the image through the neural network to obtain a feature map; and performing semantic segmentation on the feature map through the neural network to obtain lane line probability maps of N lane lines, wherein the pixel value of each pixel point in the lane line probability map of each lane line represents the probability value that the corresponding pixel point in the image belongs to that lane line, and N is an integer greater than 0.
- The method according to claim 4, wherein performing semantic segmentation on the feature map through the neural network to obtain the lane line probability maps of the N lane lines comprises: performing semantic segmentation on the feature map through the neural network to obtain probability maps of N+1 channels, the N+1 channels corresponding to the N lane lines and the background respectively; and obtaining the lane line probability maps of the N lane lines from the probability maps of the N+1 channels.
- The method according to claim 4 or 5, wherein N is 2 and the N+1 channels correspond to the background, the left lane line, and the right lane line respectively; or N is 3 and the N+1 channels correspond to the background, the left lane line, the middle lane line, and the right lane line respectively; or N is 4 and the N+1 channels correspond to the background, the left-left lane line, the left lane line, the right lane line, and the right-right lane line respectively.
- The method according to any one of claims 4-6, wherein determining the region where a lane line is located according to the lane line probability map of the lane line comprises: selecting, from the lane line probability map of the lane line, pixel points whose probability values are greater than a first preset threshold; performing a maximum connected component search in the lane line probability map based on the selected pixel points to find a set of pixel points belonging to the lane line; and determining the region where the lane line is located based on the set of pixel points belonging to the lane line.
- The method according to claim 7, wherein determining the region where the lane line is located based on the set of pixel points belonging to the lane line comprises: summing the probability values of all pixel points in the set of pixel points belonging to the lane line to obtain a confidence of the lane line; and if the confidence is greater than a second preset threshold, taking the region formed by the set of pixel points as the region where the lane line is located.
- The method according to any one of claims 3-8, further comprising: preprocessing an original image including the vehicle driving environment; wherein performing semantic segmentation on the image including the vehicle driving environment through the neural network comprises: performing semantic segmentation on the preprocessed image through the neural network.
- The method according to any one of claims 3-9, wherein determining the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result comprises: performing curve fitting on the pixel points in the region where each lane line is located to obtain lane line information of each lane line, the lane line information including the distance from at least one point on the lane line to the vehicle; and determining, according to the driving state of the vehicle and the lane line information of the lane line, the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line.
- The method according to claim 10, wherein performing curve fitting on the pixel points in the region where the lane line is located to obtain the lane line information of the lane line comprises: selecting a plurality of pixel points from the region where one lane line is located; converting the plurality of pixel points from the camera coordinate system of the camera into the world coordinate system to obtain coordinates of the plurality of pixel points in the world coordinate system; and performing curve fitting on the plurality of pixel points in the world coordinate system according to their coordinates in the world coordinate system to obtain the lane line information of the lane line.
- The method according to claim 10 or 11, further comprising, after obtaining the lane line information of the lane line: filtering parameters in the lane line information of the lane line; wherein determining, according to the driving state of the vehicle and the lane line information of the lane line, the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line comprises: determining, according to the driving state of the vehicle and the filtered lane line information of the lane line, the estimated distance for the vehicle to drive out of the lane line and/or the estimated time for the vehicle to drive out of the lane line.
- The method according to claim 12, wherein filtering the parameters in the lane line information of the lane line comprises: performing Kalman filtering on the parameter values of the parameters in the lane line information according to the parameter values of the parameters in the lane line information and the parameter values of the parameters in historical lane line information of the lane line obtained based on a previous frame image, the previous frame image being a frame image whose detection timing is before the image in the video where the image is located.
- The method according to claim 13, further comprising, before performing Kalman filtering on the parameter values of the parameters in the lane line information: selecting, as valid lane line information for Kalman filtering, lane line information in which the parameter values of the parameters have changed relative to the parameter values of the corresponding parameters in the historical lane line information and in which the differences between the parameter values of the parameters and the parameter values of the corresponding parameters in the historical lane line information are less than a third preset threshold.
- The method according to any one of claims 10-14, wherein determining the estimated distance for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result comprises: determining the estimated distance between the vehicle and the lane line according to the position of the vehicle in the world coordinate system and the lane line information of the lane line, the driving state of the vehicle including the position of the vehicle in the world coordinate system.
- The method according to any one of claims 10-14, wherein determining the estimated time for the vehicle to drive out of the lane line according to the driving state of the vehicle and the lane line detection result comprises: determining the estimated time for the vehicle to drive out of the lane line according to the speed of the vehicle, the position of the vehicle in the world coordinate system, and the lane line information of the lane line, the driving state of the vehicle including the speed of the vehicle and the position of the vehicle in the world coordinate system.
- The method according to any one of claims 1-16, wherein performing intelligent driving control on the vehicle according to the estimated distance and/or the estimated time comprises: comparing the estimated distance and/or the estimated time with at least one predetermined threshold; and when the comparison result satisfies one or more preset conditions, performing intelligent driving control corresponding to the satisfied preset conditions, the intelligent driving control including automatic driving control and/or assisted driving control.
- The method according to claim 17, wherein the automatic driving control includes any one or more of the following: braking, deceleration, changing the driving direction, lane line keeping, and driving mode switching control.
- The method according to claim 18, wherein performing assisted driving control on the vehicle comprises: performing a lane line departure warning; or performing a lane line keeping prompt.
- The method according to any one of claims 17-19, wherein, when there are multiple preset conditions, the degrees of intelligent driving control respectively corresponding to the multiple preset conditions increase step by step.
- The method according to claim 20, wherein performing the intelligent driving control corresponding to the satisfied preset conditions when the comparison result satisfies one or more preset conditions comprises: if the estimated distance is less than or equal to a fourth preset threshold and greater than a fifth preset threshold, providing a lane line departure prompt to the vehicle; or, if the estimated time is less than or equal to a sixth preset threshold and greater than a seventh preset threshold, providing a lane line departure prompt to the vehicle; or, if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, and the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, providing a lane line departure prompt to the vehicle; wherein the lane line departure warning includes the lane line departure prompt, the fifth preset threshold is less than the fourth preset threshold, and the seventh preset threshold is less than the sixth preset threshold.
- The method according to claim 21, wherein performing the intelligent driving control corresponding to the satisfied preset conditions when the comparison result satisfies one or more preset conditions further comprises: if the estimated distance is less than or equal to the fifth preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle; or, if the estimated time is less than or equal to the seventh preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle; or, if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, performing automatic driving control and/or a lane line departure alarm on the vehicle; wherein the lane line departure warning includes the lane line departure alarm.
- The method according to claim 22, wherein performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distance is less than or equal to the fifth preset threshold comprises: performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distances determined based on the image and historical frame images are all less than or equal to the fifth preset threshold, the historical frame images including at least one frame image whose detection timing is before the image in the video where the image is located; or, performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated time is less than or equal to the seventh preset threshold comprises: performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated times determined based on the image and historical frame images are all less than or equal to the seventh preset threshold; or, performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold comprises: performing automatic driving control and/or a lane line departure alarm on the vehicle if the estimated distances determined based on the image and historical frame images are all less than or equal to the fifth preset threshold and the estimated times determined based on the image and historical frame images are all less than or equal to the seventh preset threshold.
- A lane line-based intelligent driving control apparatus, comprising: an acquisition module configured to acquire a lane line detection result of a vehicle driving environment; a determination module configured to determine, according to a driving state of the vehicle and the lane line detection result, an estimated distance for the vehicle to drive out of the lane line and/or an estimated time for the vehicle to drive out of the lane line; and a control module configured to perform intelligent driving control on the vehicle according to the estimated distance and/or the estimated time.
- The apparatus according to claim 24, wherein the acquisition module comprises: a detection unit configured to detect a lane line of the vehicle driving environment based on a neural network to obtain the lane line detection result; or an acquisition unit configured to acquire the lane line detection result of the vehicle driving environment from an advanced driving assistance system.
- The apparatus according to claim 25, wherein the detection unit comprises: a neural network configured to perform semantic segmentation on an image including the vehicle driving environment and output a lane line probability map, the lane line probability map being used to indicate probability values that at least one pixel point in the image respectively belongs to a lane line; and a determination subunit configured to determine a region where the lane line is located according to the lane line probability map, the lane line detection result including the region where the lane line is located.
- The apparatus according to claim 26, wherein the neural network is configured to: perform feature extraction on the image to obtain a feature map; and perform semantic segmentation on the feature map to obtain lane line probability maps of N lane lines, wherein the pixel value of each pixel point in the lane line probability map of each lane line represents the probability value that the corresponding pixel point in the image belongs to that lane line, and N is an integer greater than 0.
- The apparatus according to claim 27, wherein, when performing semantic segmentation on the feature map to obtain the lane line probability maps of the N lane lines, the neural network is configured to: perform semantic segmentation on the feature map to obtain probability maps of N+1 channels, the N+1 channels corresponding to the N lane lines and the background respectively; and obtain the lane line probability maps of the N lane lines from the probability maps of the N+1 channels.
- 根据权利要求27或28所述的装置,其特征在于,N的取值为2,所述N+1个通道分别对应于背景、左车道线和右车道线;或者,N的取值为3,所述N+1个通道分别对应于背景、左车道线、中车道线和右车道线;或者,N的取值为4,所述N+1个通道分别对应于背景、左左车道线、左车道线、右车道线和右右车道线。
- 根据权利要求27-29任一所述的装置,其特征在于,所述确定子单元用于:从车道线的车道线概率图中选取概率值大于第一预设阈值的像素点;基于选取出的像素点在所述车道线概率图中进行最大连通域查找,找出属于所述车道线的像素点集合;以及基于属于所述车道线的像素点集合确定所述车道线所在区域。
- 根据权利要求30所述的装置,其特征在于,所述确定子单元基于属于所述车道线的像素点集合确定所述车道线所在区域时,用于:统计属于所述车道线的像素点集合中所有像素点的概率值之和,得到所述车道线的置信度;若所述置信度大于第二预设阈值,以所述像素点集合形成的区域作为所述车道线所在区域。
- 根据权利要求26-31任一所述的装置,其特征在于,还包括:预处理模块,用于对包括所述车辆行驶环境的原始图像进行预处理;所述神经网络对包括所述车辆行驶环境的图像进行语义分割时,用于对预处理得到的所述图像进行语义分割。
- 根据权利要求26-32任一所述的装置,其特征在于,所述确定模块包括:拟合处理单元,用于分别对每条所述车道线所在区域中的像素点进行曲线拟合,得到每条所述车道线的车道线信息;所述车道线信息包括所述车道线上至少一点到所述车辆的距离;确定单元,用于根据所述车辆的行驶状态和所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
- 根据权利要求33所述的装置,其特征在于,所述拟合处理单元,用于:从一条所述车道线所在区域中选取多个像素点;将所述多个像素点从所述摄像头所在的相机坐标系转换到世界坐标系中,得到所述多个像素点在世界坐标系中的坐标;以及根据所述多个像素点在世界坐标系中的坐标,在世界坐标系中对所述多个像素点进行曲线拟合,得到所述车道线的车道线信息。
- 根据权利要求33或34所述的装置,其特征在于,所述确定模块还包括:滤波单元,用于对所述车道线的车道线信息中的参数进行滤波;所述确定单元用于:根据所述车辆的行驶状态和滤波得到的所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计距离和/或所述车辆驶出所述车道线的估计时间。
- 根据权利要求35所述的装置,其特征在于,所述滤波单元,用于根据所述车道线信息中参数的参数值与基于上一帧图像获得的所述车道线的历史车道线信息中参数的参数值,对所述车道线信息中参数的参数值进行卡尔曼滤波;所述上一帧图像为所述图像所在视频中检测时序位于所述图像之前的一帧图像。
- 根据权利要求36所述的装置,其特征在于,所述确定模块还包括:选取单元,用于选取所述车道线信息中参数的参数值相对于所述历史车道线信息中对应参数的参数值有变化、且所述车道线信息中参数的参数值与所述历史车道线信息中对应参数的参数值之间的差值小于第三预设阈值的所述车道线信息,以作为有效的车道线信息进行卡尔曼滤波。
- 根据权利要求33-37任一所述的装置,其特征在于,所述确定模块根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计距离时,用于根据所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆与所述车道线之间的估计距离;所述车辆的行驶状态包括所述车辆在世界坐标系中的位置。
- 根据权利要求33-37任一所述的装置,其特征在于,所述确定模块根据所述车辆的行驶状态和车道线检测结果,确定所述车辆驶出所述车道线的估计事件时,用于根据所述车辆的速度和所述车辆在世界坐标系中的位置、以及所述车道线的车道线信息,确定所述车辆驶出所述车道线的估计时间;所述车辆的行驶状态包括所述车辆的速度和所述车辆在世界坐标系中的位置。
- The apparatus according to any one of claims 24-39, wherein the control module comprises: a comparison unit configured to compare the estimated distance and/or the estimated time with at least one predetermined threshold; and a control unit configured to perform, when the comparison result satisfies one or more preset conditions, the intelligent driving control corresponding to the satisfied preset condition(s); the intelligent driving control comprises automatic driving control and/or assisted driving control.
- The apparatus according to claim 40, wherein the automatic driving control comprises any one or more of the following: braking, decelerating, changing the driving direction, lane line keeping, and driving mode switching control.
- The apparatus according to claim 41, wherein when performing assisted driving control on the vehicle, the control unit is configured to issue a lane line departure warning; or to issue a lane line keeping prompt.
- The apparatus according to any one of claims 40-42, wherein when there are multiple preset conditions, the degrees of intelligent driving control respectively corresponding to the multiple preset conditions increase progressively.
- The apparatus according to claim 43, wherein the control unit is configured to: if the estimated distance is less than or equal to a fourth preset threshold and greater than a fifth preset threshold, issue a lane line departure prompt for the vehicle; or, if the estimated time is less than or equal to a sixth preset threshold and greater than a seventh preset threshold, issue a lane line departure prompt for the vehicle; or, if the estimated distance is less than or equal to the fourth preset threshold and greater than the fifth preset threshold, and the estimated time is less than or equal to the sixth preset threshold and greater than the seventh preset threshold, issue a lane line departure prompt for the vehicle; wherein the lane line departure warning comprises the lane line departure prompt; the fifth preset threshold is less than the fourth preset threshold, and the seventh preset threshold is less than the sixth preset threshold.
- The apparatus according to claim 44, wherein the control unit is further configured to: if the estimated distance is less than or equal to the fifth preset threshold, perform automatic driving control and/or issue a lane line departure alarm for the vehicle; or, if the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and/or issue a lane line departure alarm for the vehicle; or, if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, perform automatic driving control and/or issue a lane line departure alarm for the vehicle; wherein the lane line departure warning comprises the lane line departure alarm.
- The apparatus according to claim 45, wherein: when performing automatic driving control and/or issuing a lane line departure alarm for the vehicle if the estimated distance is less than or equal to the fifth preset threshold, the control unit is configured to perform automatic driving control and/or issue a lane line departure alarm for the vehicle if the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold, the historical frame images comprising at least one frame that precedes the image in the detection sequence of the video to which the image belongs; or, when performing automatic driving control and/or issuing a lane line departure alarm for the vehicle if the estimated time is less than or equal to the seventh preset threshold, the control unit is configured to perform automatic driving control and/or issue a lane line departure alarm for the vehicle if the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold; or, when performing automatic driving control and/or issuing a lane line departure alarm for the vehicle if the estimated distance is less than or equal to the fifth preset threshold and the estimated time is less than or equal to the seventh preset threshold, the control unit is configured to perform automatic driving control and/or issue a lane line departure alarm for the vehicle if the estimated distances determined based on the image and on historical frame images are all less than or equal to the fifth preset threshold and the estimated times determined based on the image and on historical frame images are all less than or equal to the seventh preset threshold.
- An electronic device, comprising: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, wherein when the computer program is executed, the method according to any one of claims 1-23 is implemented.
- A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the method according to any one of claims 1-23 is implemented.
- A computer program, comprising computer instructions, wherein when the computer instructions are run in a processor of a device, the method according to any one of claims 1-23 is implemented.
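The region-extraction steps recited in the claims above (select pixels whose probability exceeds a first preset threshold, search for the maximum connected component, then keep the component only if the sum of its probability values, i.e. the confidence, exceeds a second preset threshold) can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation: the threshold values, the 4-connectivity, and the pure-Python flood fill are assumptions.

```python
def largest_component(mask):
    """Return the largest 4-connected set of True cells in a 2D mask."""
    rows, cols = len(mask), len(mask[0])
    seen, best = set(), set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                comp, stack = set(), [(r, c)]
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols) or not mask[y][x]:
                        continue
                    seen.add((y, x))
                    comp.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                if len(comp) > len(best):
                    best = comp
    return best

def lane_region(prob_map, pixel_thresh=0.5, conf_thresh=1.5):
    """Threshold a per-lane probability map, keep the maximum connected
    component, and accept it only if its summed probability (the lane's
    confidence) exceeds conf_thresh. Both thresholds are illustrative."""
    mask = [[p > pixel_thresh for p in row] for row in prob_map]
    comp = largest_component(mask)
    confidence = sum(prob_map[r][c] for r, c in comp)
    return comp if confidence > conf_thresh else set()
```

With a toy 3x3 probability map, the three high-probability pixels in the top-left corner form the winning component, while an isolated bright pixel is discarded with it.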
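Claim 14 and its apparatus counterpart (claim 34) map selected lane pixels from the camera's image coordinates into a world coordinate system and fit a curve there. Below is a minimal sketch under a flat-ground assumption, where the image-to-road mapping is a 3x3 homography and the fitted curve is a quadratic solved via normal equations; the homography values, polynomial degree, and solver are all illustrative assumptions, not details from the patent.

```python
def apply_homography(H, u, v):
    """Map image pixel (u, v) to road-plane coordinates (x, y) via a 3x3 homography H."""
    x, y, w = (H[i][0] * u + H[i][1] * v + H[i][2] for i in range(3))
    return x / w, y / w

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):  # back substitution
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_quadratic(points):
    """Least-squares fit x = a*y^2 + b*y + c to world points (x, y),
    using the normal equations (P^T P) [a, b, c]^T = P^T x."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        row = [y * y, y, 1.0]
        for i in range(3):
            b[i] += row[i] * x
            for j in range(3):
                A[i][j] += row[i] * row[j]
    return solve3(A, b)
```

The fitted coefficients (together with the vehicle's world position) are what a downstream step would use to read off the lateral distance to the lane line at a given longitudinal offset.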
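Claims 15-17 (apparatus claims 35-37) smooth each lane line parameter across frames with Kalman filtering, gating out measurements that either did not change since the previous frame or changed by more than a third preset threshold. A scalar sketch of that idea; the process noise `q`, measurement noise `r`, and gate value are assumed for illustration.

```python
class ScalarKalman:
    """1-D Kalman filter with a constant-value process model."""
    def __init__(self, x0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        self.p += self.q                 # predict: state unchanged, variance grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward the measurement
        self.p *= (1 - k)
        return self.x

def filter_parameter(kf, new_value, prev_value, gate=0.5):
    """Kalman-update only when the parameter changed since the previous
    frame AND the change is below the gate (the 'third preset threshold');
    otherwise keep the current estimate."""
    changed = new_value != prev_value
    if changed and abs(new_value - prev_value) < gate:
        return kf.update(new_value)
    return kf.x
```

A small change is absorbed smoothly, while a jump larger than the gate (e.g. a spurious detection) leaves the estimate untouched.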
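Claims 19-23 (apparatus claims 40-46) compare the estimated departure distance and time against paired thresholds: a mild departure prompt between the outer (fourth/sixth) and inner (fifth/seventh) thresholds, and a stronger alarm or automatic intervention inside the inner ones. The sketch below checks the inner tier first, which implements the "less than or equal to the fourth and greater than the fifth" band for the prompt; all numeric values are invented for illustration, not values from the patent.

```python
def estimate_departure(lateral_offset_m, lateral_speed_mps):
    """Distance to the lane line, and time to cross it at the current
    lateral drift rate (infinite if the vehicle is not drifting outward)."""
    distance = max(lateral_offset_m, 0.0)
    time = distance / lateral_speed_mps if lateral_speed_mps > 0 else float("inf")
    return distance, time

def driving_control(distance, time,
                    d_soft=0.6, d_hard=0.2,    # 4th / 5th preset thresholds (assumed)
                    t_soft=1.5, t_hard=0.5):   # 6th / 7th preset thresholds (assumed)
    # Inner tier first: at or below the tighter thresholds, escalate.
    if distance <= d_hard or time <= t_hard:
        return "alarm_or_autonomous_correction"
    # Outer tier: between the paired thresholds, only prompt.
    if distance <= d_soft or time <= t_soft:
        return "lane_departure_prompt"
    return "no_action"
```

Claim 23 additionally requires the inner-tier condition to hold across the current and historical frames before escalating, which would amount to AND-ing this check over a short window of frames.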
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020554361A JP7024115B2 (ja) | 2018-05-31 | 2019-05-20 | Lane line-based intelligent drive control method and apparatus, and electronic device |
SG11202005094XA SG11202005094XA (en) | 2018-05-31 | 2019-05-20 | Lane line-based intelligent driving control method and apparatus, and electronic device |
US16/886,163 US11314973B2 (en) | 2018-05-31 | 2020-05-28 | Lane line-based intelligent driving control method and apparatus, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810551908.XA CN108875603B (zh) | 2018-05-31 | 2018-05-31 | Lane line-based intelligent driving control method and apparatus, and electronic device |
CN201810551908.X | 2018-05-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/886,163 Continuation US11314973B2 (en) | 2018-05-31 | 2020-05-28 | Lane line-based intelligent driving control method and apparatus, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019228211A1 true WO2019228211A1 (zh) | 2019-12-05 |
Family
ID=64335045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/087622 WO2019228211A1 (zh) | 2019-05-20 | Lane line-based intelligent driving control method and apparatus, and electronic device |
Country Status (5)
Country | Link |
---|---|
US (1) | US11314973B2 (zh) |
JP (1) | JP7024115B2 (zh) |
CN (1) | CN108875603B (zh) |
SG (1) | SG11202005094XA (zh) |
WO (1) | WO2019228211A1 (zh) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160370A (zh) * | 2019-12-27 | 2020-05-15 | 深圳佑驾创新科技有限公司 | Vehicle front position estimation method and apparatus, computer device, and storage medium |
CN111721316A (zh) * | 2020-06-22 | 2020-09-29 | 重庆大学 | High-performance region-of-interest prediction method for lane line recognition |
CN112287842A (zh) * | 2020-10-29 | 2021-01-29 | 恒大新能源汽车投资控股集团有限公司 | Lane line recognition method and apparatus, and electronic device |
CN112364822A (zh) * | 2020-11-30 | 2021-02-12 | 重庆电子工程职业学院 | Semantic segmentation system and method for autonomous driving video |
CN112906665A (zh) * | 2021-04-06 | 2021-06-04 | 北京车和家信息技术有限公司 | Traffic marking fusion method and apparatus, storage medium, and electronic device |
CN113657265A (zh) * | 2021-08-16 | 2021-11-16 | 长安大学 | Vehicle distance detection method, system, device, and medium |
US11318958B2 (en) * | 2020-11-30 | 2022-05-03 | Beijing Baidu Netcom Science Technology Co., Ltd. | Vehicle driving control method, apparatus, vehicle, electronic device and storage medium |
CN114565681A (zh) * | 2022-03-01 | 2022-05-31 | 禾多科技(北京)有限公司 | Camera calibration method, apparatus, device, medium, and product |
CN114743178A (zh) * | 2021-12-29 | 2022-07-12 | 北京百度网讯科技有限公司 | Road edge line generation method, apparatus, device, and storage medium |
EP4202759A4 (en) * | 2020-09-09 | 2023-10-25 | Huawei Technologies Co., Ltd. | TRAFFIC LANE LINE DETECTION METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIUM |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875603B (zh) * | 2018-05-31 | 2021-06-04 | 上海商汤智能科技有限公司 | Lane line-based intelligent driving control method and apparatus, and electronic device |
CN109147368A (zh) * | 2018-08-22 | 2019-01-04 | 北京市商汤科技开发有限公司 | Lane line-based intelligent driving control method, apparatus, and electronic device |
CN110858405A (zh) * | 2018-08-24 | 2020-03-03 | 北京市商汤科技开发有限公司 | Pose estimation method, apparatus and system for a vehicle-mounted camera, and electronic device |
KR102633140B1 (ko) * | 2018-10-23 | 2024-02-05 | 삼성전자주식회사 | Method and apparatus for determining driving information |
CN111209777A (zh) * | 2018-11-21 | 2020-05-29 | 北京市商汤科技开发有限公司 | Lane line detection method and apparatus, electronic device, and readable storage medium |
JP6852141B2 (ja) * | 2018-11-29 | 2021-03-31 | キヤノン株式会社 | Information processing apparatus, imaging apparatus, control method of information processing apparatus, and program |
CN109298719B (zh) * | 2018-12-04 | 2021-11-02 | 奇瑞汽车股份有限公司 | Takeover method and apparatus for an intelligent vehicle, and storage medium |
CN109582019B (zh) * | 2018-12-04 | 2021-06-29 | 奇瑞汽车股份有限公司 | Takeover method and apparatus for an intelligent vehicle upon lane-change failure, and storage medium |
CN109472251B (zh) * | 2018-12-16 | 2022-04-05 | 华为技术有限公司 | Object collision prediction method and apparatus |
CN111316337A (zh) * | 2018-12-26 | 2020-06-19 | 深圳市大疆创新科技有限公司 | Determination of installation parameters of a vehicle-mounted imaging device, and driving control method and device |
CN109598943A (zh) * | 2018-12-30 | 2019-04-09 | 北京旷视科技有限公司 | Method, apparatus, and system for monitoring vehicle violations |
CN111460866B (zh) * | 2019-01-22 | 2023-12-22 | 北京市商汤科技开发有限公司 | Lane line detection and driving control method and apparatus, and electronic device |
CN111476062A (zh) * | 2019-01-23 | 2020-07-31 | 北京市商汤科技开发有限公司 | Lane line detection method and apparatus, electronic device, and driving system |
CN111476057B (zh) * | 2019-01-23 | 2024-03-26 | 北京市商汤科技开发有限公司 | Lane line acquisition method and apparatus, and vehicle driving method and apparatus |
CN109866684B (zh) * | 2019-03-15 | 2021-06-22 | 江西江铃集团新能源汽车有限公司 | Lane departure warning method and system, readable storage medium, and computer device |
CN112131914B (zh) * | 2019-06-25 | 2022-10-21 | 北京市商汤科技开发有限公司 | Lane line attribute detection method and apparatus, electronic device, and intelligent device |
CN110781768A (zh) * | 2019-09-30 | 2020-02-11 | 奇点汽车研发中心有限公司 | Target object detection method and apparatus, electronic device, and medium |
CN110706374B (zh) * | 2019-10-10 | 2021-06-29 | 南京地平线机器人技术有限公司 | Motion state prediction method and apparatus, electronic device, and vehicle |
CN111091096B (zh) * | 2019-12-20 | 2023-07-11 | 江苏中天安驰科技有限公司 | Vehicle departure decision method and apparatus, storage medium, and vehicle |
CN111079695B (zh) * | 2019-12-30 | 2021-06-01 | 北京华宇信息技术有限公司 | Human keypoint detection and self-learning method and apparatus |
CN111257005B (zh) * | 2020-01-21 | 2022-11-01 | 北京百度网讯科技有限公司 | Method, apparatus, device, and storage medium for testing autonomous vehicles |
CN111401446A (zh) * | 2020-03-16 | 2020-07-10 | 重庆长安汽车股份有限公司 | Single-sensor and multi-sensor lane line plausibility detection method and system, and vehicle |
US20220009494A1 (en) * | 2020-07-07 | 2022-01-13 | Honda Motor Co., Ltd. | Control device, control method, and vehicle |
CN111814746A (zh) * | 2020-08-07 | 2020-10-23 | 平安科技(深圳)有限公司 | Lane line recognition method, apparatus, device, and storage medium |
CN112115857B (zh) * | 2020-09-17 | 2024-03-01 | 福建牧月科技有限公司 | Lane line recognition method and apparatus for intelligent vehicles, electronic device, and medium |
CN112172829B (zh) * | 2020-10-23 | 2022-05-17 | 科大讯飞股份有限公司 | Lane departure warning method and apparatus, electronic device, and storage medium |
CN114612736A (zh) * | 2020-12-08 | 2022-06-10 | 广州汽车集团股份有限公司 | Lane line detection method and system, and computer-readable medium |
CN114620059A (zh) * | 2020-12-14 | 2022-06-14 | 广州汽车集团股份有限公司 | Autonomous driving method and system, and computer-readable storage medium |
JP7048833B1 (ja) * | 2020-12-28 | 2022-04-05 | 本田技研工業株式会社 | Vehicle control device, vehicle control method, and program |
CN112766133A (zh) * | 2021-01-14 | 2021-05-07 | 金陵科技学院 | ReliefF-DBN-based autonomous driving departure handling method |
CN113053124B (zh) * | 2021-03-25 | 2022-03-15 | 英博超算(南京)科技有限公司 | Ranging system for intelligent vehicles |
CN113188509B (zh) * | 2021-04-28 | 2023-10-24 | 上海商汤临港智能科技有限公司 | Ranging method and apparatus, electronic device, and storage medium |
CN113255506B (zh) * | 2021-05-20 | 2022-10-18 | 浙江合众新能源汽车有限公司 | Dynamic lane line control method, system, device, and computer-readable medium |
CN113609980A (zh) * | 2021-08-04 | 2021-11-05 | 东风悦享科技有限公司 | Lane line perception method and apparatus for autonomous vehicles |
CN113706705B (zh) * | 2021-09-03 | 2023-09-26 | 北京百度网讯科技有限公司 | Image processing method, apparatus, device, and storage medium for high-definition maps |
US11845429B2 (en) * | 2021-09-30 | 2023-12-19 | GM Global Technology Operations LLC | Localizing and updating a map using interpolated lane edge data |
CN114454888B (zh) * | 2022-02-22 | 2023-10-13 | 福思(杭州)智能科技有限公司 | Lane line prediction method and apparatus, electronic device, and vehicle |
CN114663529B (zh) * | 2022-03-22 | 2023-08-01 | 阿波罗智能技术(北京)有限公司 | Extrinsic parameter determination method and apparatus, electronic device, and storage medium |
CN114475641B (zh) * | 2022-04-15 | 2022-06-28 | 天津所托瑞安汽车科技有限公司 | Lane departure warning method and apparatus, control apparatus, and storage medium |
CN115082888B (zh) * | 2022-08-18 | 2022-10-25 | 北京轻舟智航智能技术有限公司 | Lane line detection method and apparatus |
CN115235500B (zh) * | 2022-09-15 | 2023-04-14 | 北京智行者科技股份有限公司 | Lane-line-constrained pose correction method and apparatus, and all-condition static environment modeling method and apparatus |
CN116682087B (zh) * | 2023-07-28 | 2023-10-31 | 安徽中科星驰自动驾驶技术有限公司 | Adaptive driver assistance method based on spatial-pooling-network lane detection |
CN117437792B (zh) * | 2023-12-20 | 2024-04-09 | 中交第一公路勘察设计研究院有限公司 | Real-time road traffic state monitoring method, device, and system based on edge computing |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100002911A1 (en) * | 2008-07-06 | 2010-01-07 | Jui-Hung Wu | Method for detecting lane departure and apparatus thereof |
CN101894271A (zh) * | 2010-07-28 | 2010-11-24 | 重庆大学 | Visual computation and early-warning method for the angle and distance of vehicle lane departure |
CN101915672A (zh) * | 2010-08-24 | 2010-12-15 | 清华大学 | Testing device and testing method for lane departure warning systems |
CN101966838A (zh) * | 2010-09-10 | 2011-02-09 | 奇瑞汽车股份有限公司 | Lane departure warning system |
CN105320927A (zh) * | 2015-03-25 | 2016-02-10 | 中科院微电子研究所昆山分所 | Lane line detection method and system |
CN107169468A (zh) * | 2017-05-31 | 2017-09-15 | 北京京东尚科信息技术有限公司 | Method and apparatus for controlling a vehicle |
CN108875603A (zh) * | 2018-05-31 | 2018-11-23 | 上海商汤智能科技有限公司 | Lane line-based intelligent driving control method and apparatus, and electronic device |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09270098A (ja) * | 1996-04-02 | 1997-10-14 | Mitsubishi Motors Corp | Lane departure warning device |
JP2003104147A (ja) * | 2001-09-27 | 2003-04-09 | Mazda Motor Corp | Vehicle departure warning device |
JP5745220B2 (ja) * | 2006-06-11 | 2015-07-08 | ボルボ テクノロジー コーポレイション | Method and apparatus for maintaining vehicle lateral spacing using an automated lane keeping system |
JP2010191893A (ja) * | 2009-02-20 | 2010-09-02 | Nissan Motor Co Ltd | Driving impairment state detection device and driving impairment state detection method |
JP5389864B2 (ja) * | 2011-06-17 | 2014-01-15 | クラリオン株式会社 | Lane departure warning device |
EP2629243A1 (de) * | 2012-02-15 | 2013-08-21 | Delphi Technologies, Inc. | Method for detecting and tracking lane markings |
JP5926080B2 (ja) | 2012-03-19 | 2016-05-25 | 株式会社日本自動車部品総合研究所 | Traffic lane marking recognition device and program |
WO2013186903A1 (ja) * | 2012-06-14 | 2013-12-19 | トヨタ自動車株式会社 | Lane marking detection device and driving assistance system |
DE112013004267T5 (de) * | 2012-08-30 | 2015-06-25 | Honda Motor Co., Ltd. | Road marking recognition device |
CN103832433B (zh) * | 2012-11-21 | 2016-08-10 | 中国科学院沈阳计算技术研究所有限公司 | Lane departure and forward collision warning system and implementation method |
WO2015083009A1 (en) * | 2013-12-04 | 2015-06-11 | Mobileye Vision Technologies Ltd. | Systems and methods for mimicking a leading vehicle |
US9988047B2 (en) * | 2013-12-12 | 2018-06-05 | Magna Electronics Inc. | Vehicle control system with traffic driving control |
EP3292024A4 (en) * | 2015-05-06 | 2018-06-20 | Magna Mirrors of America, Inc. | Vehicle vision system with blind zone display and alert system |
WO2016183074A1 (en) * | 2015-05-10 | 2016-11-17 | Mobileye Vision Technologies Ltd. | Road profile along a predicted path |
US9916522B2 (en) | 2016-03-11 | 2018-03-13 | Kabushiki Kaisha Toshiba | Training constrained deconvolutional networks for road scene semantic segmentation |
US10049279B2 (en) * | 2016-03-11 | 2018-08-14 | Qualcomm Incorporated | Recurrent networks with motion-based attention for video understanding |
JP6672076B2 (ja) * | 2016-05-27 | 2020-03-25 | 株式会社東芝 | Information processing apparatus and mobile body apparatus |
JP6310503B2 (ja) * | 2016-06-06 | 2018-04-11 | 本田技研工業株式会社 | Vehicle and lane change timing determination method |
US10859395B2 (en) * | 2016-12-30 | 2020-12-08 | DeepMap Inc. | Lane line creation for high definition maps for autonomous vehicles |
US11493918B2 (en) * | 2017-02-10 | 2022-11-08 | Magna Electronics Inc. | Vehicle driving assist system with driver attentiveness assessment |
CN106919915B (zh) * | 2017-02-22 | 2020-06-12 | 武汉极目智能技术有限公司 | ADAS-based map road marking and road quality collection apparatus and method |
DE112019000070T5 (de) * | 2018-01-07 | 2020-03-12 | Nvidia Corporation | Guiding vehicles through vehicle maneuvers using machine learning models |
WO2019168869A1 (en) * | 2018-02-27 | 2019-09-06 | Nvidia Corporation | Real-time detection of lanes and boundaries by autonomous vehicles |
JP7008617B2 (ja) * | 2018-12-21 | 2022-01-25 | 本田技研工業株式会社 | Vehicle control device |
-
2018
- 2018-05-31 CN CN201810551908.XA patent/CN108875603B/zh active Active
-
2019
- 2019-05-20 WO PCT/CN2019/087622 patent/WO2019228211A1/zh active Application Filing
- 2019-05-20 JP JP2020554361A patent/JP7024115B2/ja active Active
- 2019-05-20 SG SG11202005094XA patent/SG11202005094XA/en unknown
-
2020
- 2020-05-28 US US16/886,163 patent/US11314973B2/en active Active
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160370A (zh) * | 2019-12-27 | 2020-05-15 | 深圳佑驾创新科技有限公司 | Vehicle front position estimation method and apparatus, computer device, and storage medium |
CN111160370B (zh) * | 2019-12-27 | 2024-02-27 | 佑驾创新(北京)技术有限公司 | Vehicle front position estimation method and apparatus, computer device, and storage medium |
CN111721316A (zh) * | 2020-06-22 | 2020-09-29 | 重庆大学 | High-performance region-of-interest prediction method for lane line recognition |
EP4202759A4 (en) * | 2020-09-09 | 2023-10-25 | Huawei Technologies Co., Ltd. | TRAFFIC LANE LINE DETECTION METHOD, ASSOCIATED DEVICE AND COMPUTER READABLE STORAGE MEDIUM |
CN112287842A (zh) * | 2020-10-29 | 2021-01-29 | 恒大新能源汽车投资控股集团有限公司 | Lane line recognition method and apparatus, and electronic device |
US11318958B2 (en) * | 2020-11-30 | 2022-05-03 | Beijing Baidu Netcom Science Technology Co., Ltd. | Vehicle driving control method, apparatus, vehicle, electronic device and storage medium |
CN112364822B (zh) * | 2020-11-30 | 2022-08-19 | 重庆电子工程职业学院 | Semantic segmentation system and method for autonomous driving video |
CN112364822A (zh) * | 2020-11-30 | 2021-02-12 | 重庆电子工程职业学院 | Semantic segmentation system and method for autonomous driving video |
CN112906665A (zh) * | 2021-04-06 | 2021-06-04 | 北京车和家信息技术有限公司 | Traffic marking fusion method and apparatus, storage medium, and electronic device |
CN113657265A (zh) * | 2021-08-16 | 2021-11-16 | 长安大学 | Vehicle distance detection method, system, device, and medium |
CN113657265B (zh) * | 2021-08-16 | 2023-10-10 | 长安大学 | Vehicle distance detection method, system, device, and medium |
CN114743178A (zh) * | 2021-12-29 | 2022-07-12 | 北京百度网讯科技有限公司 | Road edge line generation method, apparatus, device, and storage medium |
CN114743178B (zh) * | 2021-12-29 | 2024-03-08 | 北京百度网讯科技有限公司 | Road edge line generation method, apparatus, device, and storage medium |
CN114565681A (zh) * | 2022-03-01 | 2022-05-31 | 禾多科技(北京)有限公司 | Camera calibration method, apparatus, device, medium, and product |
CN114565681B (zh) * | 2022-03-01 | 2022-11-22 | 禾多科技(北京)有限公司 | Camera calibration method, apparatus, device, medium, and product |
Also Published As
Publication number | Publication date |
---|---|
JP7024115B2 (ja) | 2022-02-22 |
CN108875603A (zh) | 2018-11-23 |
SG11202005094XA (en) | 2020-06-29 |
CN108875603B (zh) | 2021-06-04 |
US11314973B2 (en) | 2022-04-26 |
US20200293797A1 (en) | 2020-09-17 |
JP2021508901A (ja) | 2021-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019228211A1 (zh) | Lane line-based intelligent driving control method and apparatus, and electronic device | |
JP7106664B2 (ja) | Intelligent driving control method and apparatus, electronic device, program, and medium | |
US11643076B2 (en) | Forward collision control method and apparatus, electronic device, program, and medium | |
US11840239B2 (en) | Multiple exposure event determination | |
US9965719B2 (en) | Subcategory-aware convolutional neural networks for object detection | |
US10984266B2 (en) | Vehicle lamp detection methods and apparatuses, methods and apparatuses for implementing intelligent driving, media and devices | |
WO2019114036A1 (zh) | Face detection method and apparatus, computer device, and computer-readable storage medium | |
US20210117704A1 (en) | Obstacle detection method, intelligent driving control method, electronic device, and non-transitory computer-readable storage medium | |
Haque et al. | A computer vision based lane detection approach | |
Romdhane et al. | An improved traffic signs recognition and tracking method for driver assistance system | |
CN110781768A (zh) | Target object detection method and apparatus, electronic device, and medium | |
JP2021530048A (ja) | Multi-level target classification method and apparatus, traffic sign detection method and apparatus, device, and medium | |
Mu et al. | Multiscale edge fusion for vehicle detection based on difference of Gaussian | |
Saleh et al. | Traffic signs recognition and distance estimation using a monocular camera | |
Muthalagu et al. | Vehicle lane markings segmentation and keypoint determination using deep convolutional neural networks | |
Gabb et al. | High-performance on-road vehicle detection in monocular images | |
Lin et al. | Improved traffic sign recognition for in-car cameras | |
Virgilio G et al. | Vision-based blind spot warning system by deep neural networks | |
Liu et al. | Detection of geometric shape for traffic lane and mark | |
Wang et al. | G-NET: Accurate Lane Detection Model for Autonomous Vehicle | |
Manoharan et al. | Detection of unstructured roads from a single image for autonomous navigation applications | |
Pydipogu et al. | Robust lane detection and object tracking In relation to the intelligence transport system | |
TWI832270B (zh) | Road condition detection method, electronic device, and computer-readable storage medium | |
EP4224361A1 (en) | Lane line detection method and apparatus | |
Doshi et al. | ROI based real time straight lane line detection using Canny Edge Detector and masked bitwise operator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19810672 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020554361 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19810672 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/03/2021) |