WO2012011713A2 - Lane recognition system and method - Google Patents

Lane recognition system and method

Info

Publication number
WO2012011713A2
WO2012011713A2 (PCT/KR2011/005290)
Authority
WO
WIPO (PCT)
Prior art keywords
lane
equation
point
candidate point
updated
Prior art date
Application number
PCT/KR2011/005290
Other languages
English (en)
Korean (ko)
Other versions
WO2012011713A3 (fr)
Inventor
한영인
서용덕
송영기
최현철
오세영
Original Assignee
주식회사 이미지넥스트
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 이미지넥스트
Publication of WO2012011713A2 publication Critical patent/WO2012011713A2/fr
Publication of WO2012011713A3 publication Critical patent/WO2012011713A3/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/06 Road conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0052 Filtering, filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Definitions

  • the present invention relates to a lane recognition system and method, and more particularly, to a system and method for performing lane recognition using an image taken with a camera.
  • Conventional lane detection methods include a mobile-device-based straight-lane detection method that uses fast, simple edge extraction and the Hough transform.
  • PC-based methods include detecting a lane as a straight line or a hyperbola pair. These methods are fast thanks to their simple calculations, but are vulnerable to problems such as lane loss and the vibration that can occur during driving.
  • Other PC-based methods construct lanes as quadratic curves or B-splines using rather complex processes, such as random walks, which are robust against lane breakage, and Kalman filters, which are robust against vibration and other noise.
  • Reliable detection methods use complex calculations on high-performance PCs above 3.0 GHz to provide robust performance for lane breaks, obstructions, and traffic signs on the road.
  • As for lane color recognition, since the lane color changes in various ways according to lighting conditions, one approach trains a recognizer by sampling lane colors under various conditions and collecting enormous amounts of data. However, this approach does not sufficiently reflect the nonlinearity of the color change. A method is therefore needed that recognizes lane colors accurately despite color changes, by sufficiently reflecting the color variation caused by illumination.
  • For departure warning, a method of calculating the horizontal distance between the lane and the vehicle, estimating the time to lane crossing (TLC), and comparing it against a threshold to trigger an alarm is commonly used, as illustrated below.
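The patent text does not reproduce the TLC formula at this point; as a hedged illustration, the estimate commonly used in such warning systems divides the remaining lateral distance by the lateral approach speed:

$$\mathrm{TLC} = \frac{d_{lat}}{\dot{d}_{lat}}$$

where $d_{lat}$ is the current horizontal distance between the vehicle and the lane marking and $\dot{d}_{lat}$ is the rate at which that distance is shrinking; an alarm is raised when the TLC drops below the threshold.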
  • an object of the present invention is to provide a lane recognition system and method for recognizing a lane equation, lane type, lane color, lane departure situation, etc. using an image captured by a camera.
  • a lane recognition method including: preprocessing a vehicle front image obtained by a camera module mounted on a vehicle into an integrated image; if no previous lane equation exists, extracting lane candidate points by performing a convolution of the integrated image and a lane candidate point template; converting the coordinates of the extracted lane candidate points into coordinates in an actual distance coordinate system and obtaining a first lane equation for each lane candidate cluster obtained by clustering; selecting, among the obtained lane candidate clusters, a lane candidate cluster pair that satisfies predetermined conditions as the initially detected lane pair; and converting the lane equation of the selected lane pair in the actual distance coordinate system into a lane equation in the image coordinate system.
  • the integrated image may be an integrated image of a G channel component of the vehicle front image.
  • if a previous lane equation exists, setting a region of interest within a predetermined range around points located on the previous lane equation, and extracting lane candidate points by performing the convolution of the integrated image and the lane candidate point template within the region of interest.
  • The method may include selecting valid lane candidate points by applying a random sample consensus (RANSAC) algorithm to the extracted lane candidate points, and obtaining an updated version of the previous lane equation by applying a Kalman filter that uses the valid lane candidate points as measurements.
  • among the extracted lane candidate points, the point located furthest to the right in the ROI of the left lane is selected as a representative value, and the point located furthest to the left in the ROI of the right lane is selected as a representative value.
  • valid lane candidate points may then be selected using the RANSAC algorithm.
  • the lane candidate point template is a step function in which the '+1' and '-1' values each have the same width (N/2) with respect to the coordinates (u, v), and the convolution of the lane candidate point template and the integral image is performed, where:
  • I(u, v) is the brightness value of the integrated image at pixel (u, v),
  • conv_left is the convolution value of the left lane candidate point template and the integrated image, and
  • conv_right is the convolution value of the right lane candidate point template and the integrated image.
  • Coordinates (u, v) whose convolution values conv_left and conv_right are both greater than or equal to a threshold value may be extracted as the lane candidate points.
  • b_R is the slope of the right lane equation in the updated lane equation,
  • b_L is the slope of the left lane equation in the updated lane equation,
  • b1 and b2 are predetermined slope thresholds,
  • Line_L' and Line_R' are the new left and right lane equations, and
  • Line_L and Line_R are the left and right lane equations in the updated lane equation.
  • if the vanishing point of the updated lane equation is greater than a preset maximum value, it may be changed to the maximum value, and if it is smaller than a preset minimum value, it may be changed to the minimum value.
  • in the updated lane equation, the channel brightness of pixels is scanned to the left and right of points sampled at predetermined intervals; if the difference between the maximum and minimum brightness is greater than or equal to a threshold, the sampled point is determined to be a point on the lane. If the proportion of on-lane points among the sampled points is less than or equal to a predetermined criterion, the updated lane may be determined to be a dotted line.
  • the lane color may be recognized by inputting a lane color average of an area centered on a point located in the updated lane equation and a road color average obtained for a road area having a predetermined size to a neural network.
  • an X coordinate of a point away from the vehicle by a predetermined distance in the vehicle front direction may be calculated to determine whether the vehicle leaves the lane.
  • the lane recognition system for solving the technical problem includes an image preprocessing module for preprocessing the vehicle front image obtained from the camera module mounted on the vehicle into an integrated image, and an initial lane detection module that, if there is no previous lane equation, extracts lane candidate points by performing a convolution of the integrated image and a lane candidate point template, converts the coordinates of the extracted lane candidate points into coordinates in an actual distance coordinate system, and obtains the first lane equation of each lane candidate cluster obtained by clustering.
  • if there is a previous lane equation, a lane tracking module sets the region of interest within a predetermined range around points located on the previous lane equation and extracts lane candidate points by performing the convolution of the integrated image and the lane candidate point template within the region of interest.
  • the system may further include a lane type recognition module configured to determine the updated lane to be a dotted line when the proportion of on-lane points among the sampled points is less than or equal to a predetermined criterion.
  • the system may further include a lane color recognition module configured to recognize the lane color by inputting to a neural network the lane color average of areas centered on points located on the updated lane equation and the road color average obtained for a road area of a predetermined size.
  • the system may further include a lane departure warning module configured to calculate, from the updated lane equation, the X coordinate of a point a predetermined distance away from the vehicle in the vehicle front direction (Y-axis direction) and determine whether the vehicle leaves the lane.
  • according to the present invention, lane tracking, lane type recognition, lane color recognition, lane departure warning, and the like can be performed robustly and with a small amount of computation, even under vehicle vibration, lighting changes, and lane change situations.
  • FIG. 1 is a block diagram provided to explain a lane recognition system according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating the lane recognition system of FIG. 1 in more detail.
  • FIG. 3 is a flowchart provided to explain an operation of the lane recognition system of FIG. 2.
  • FIG. 4 is a diagram illustrating an actual distance coordinate system and an image coordinate system used in a lane detection operation.
  • FIG. 5 is a flowchart illustrating the initial lane equation detection step of FIG. 3 in more detail.
  • FIG. 6 is a diagram provided to explain a method of extracting lane candidate points using a lane candidate point template.
  • FIG. 7 is a diagram provided to explain a lane equation in an image coordinate system.
  • FIG. 8 is a flowchart illustrating the lane equation tracking step of FIG. 3 in more detail.
  • FIG. 9 is a view provided to explain a lane type recognition operation.
  • FIG. 10 is a diagram provided to explain a method of obtaining a lane curvature.
  • FIG. 11 is a flowchart of a lane color recognition operation according to the present invention.
  • FIG. 12 is a diagram provided to explain a lane color recognition operation.
  • FIG. 13 is a view provided to explain a color recognition process using a neural network.
  • FIG. 1 is a block diagram provided to explain a lane recognition system according to an exemplary embodiment of the present invention.
  • a lane recognition system 200 receives the image obtained from a camera module 100 installed in a vehicle (not shown) and, based on it, performs the function of recognizing various information about the lane in which the vehicle is currently located.
  • the information on the lane may include a lane equation, a lane type (solid line or not), lane curvature, lane color, lane departure, and the like.
  • the camera module 100 is installed in front of the vehicle and performs a function of acquiring a color image of the road in front of the vehicle.
  • the camera module 100 transmits the acquired image to the lane recognition system 200 in real time.
  • the camera module 100 may include a lens having a large angle of view, such as a wide-angle lens or a fisheye lens, and may be modeled as a pinhole camera.
  • the camera module 100 may acquire a 3D object as a 2D image through a lens having a wide angle of view of about 60 ° to about 120 °.
  • FIG. 2 is a block diagram illustrating the lane recognition system of FIG. 1 in more detail
  • FIG. 3 is a flowchart provided to explain an operation of the lane recognition system of FIG. 2.
  • the lane recognition system 200 includes an image preprocessing module 210, an initial lane detection module 220, a lane tracking module 230, a lane type recognition module 240, a lane color recognition module 250, a lane departure warning module 260, a control module 270, and a storage module 280.
  • the image preprocessing module 210 may receive a vehicle front image from the camera module 100, calculate an integral image thereof, and output the calculated image.
  • the initial lane detection module 220 may detect the initial lane equation using an integrated image input from the image preprocessing module 210.
  • the lane tracking module 230 may obtain a new lane equation using the previous lane equation if there is one.
  • the lane type recognition module 240 may perform a function of recognizing whether the lane is a solid line or a dotted line.
  • the lane color recognition module 250 performs a function of recognizing whether the lane color is white, yellow, or blue. Through this, information about whether the lane is a center line or a bus-only lane can be obtained.
  • the lane departure warning module 260 determines a lane departure situation of the vehicle and performs a warning function.
  • the control module 270 controls the overall operation of the lane recognition system 200.
  • the storage module 280 stores various kinds of information and data necessary for the operation of the lane recognizing system 200 and performs a function of providing the information according to a request of each component.
  • FIG. 3 is a flowchart provided to explain the operation of the lane recognition system according to the present invention.
  • the image preprocessing module 210 receives an image of the vehicle front from the camera module 100 (S310).
  • the image preprocessing module 210 selects a G channel component from the input color components of the vehicle front image and calculates and outputs an integrated image thereof (S320).
  • the reason for selecting the G channel component is that the division between the lane portion and the road portion is clearer than with the other channel components, even in a tunnel or at night. The operation described below can also be performed with the R or B channel image, but it is preferable to use the G channel image. A sketch of the integral-image computation follows.
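A minimal sketch of the integral-image preprocessing (the function names, the stand-in frame, and the array layout are our own; only the G channel is summed, as described above):

```python
import numpy as np

def integral_image(channel: np.ndarray) -> np.ndarray:
    """Summed-area table of a single image channel."""
    return channel.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii: np.ndarray, u0: int, v0: int, u1: int, v1: int) -> int:
    """Sum of pixels in the rectangle [u0, u1) x [v0, v1) using 4 lookups."""
    # Layout: ii[v, u] holds the sum of channel[0:v+1, 0:u+1].
    total = ii[v1 - 1, u1 - 1]
    if u0 > 0: total -= ii[v1 - 1, u0 - 1]
    if v0 > 0: total -= ii[v0 - 1, u1 - 1]
    if u0 > 0 and v0 > 0: total += ii[v0 - 1, u0 - 1]
    return int(total)

# Usage: G channel of an RGB front-view frame (random stand-in image).
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
ii_g = integral_image(frame[:, :, 1])      # G channel only
print(box_sum(ii_g, 100, 200, 140, 210))   # 40x10 box sum in O(1)
```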
  • the image preprocessing module 210 may additionally perform a function of correcting a distortion or an error in an image input from the camera module 100.
  • control module 270 determines whether there is a previous lane equation (S330), and controls the initial lane detection module 220 accordingly to detect the initial lane equation.
  • the initial lane detection module 220 detects the initial lane equation using the integrated image of the G channel calculated by the image preprocessing module 210 (S340).
  • An initial lane equation detection operation in step S340 will be described in more detail with reference to FIGS. 4 and 5.
  • FIG. 4 is a diagram illustrating an actual distance coordinate system and an image coordinate system used in a lane detection operation
  • FIG. 5 is a flowchart illustrating the initial lane equation detection step of FIG. 3 in more detail
  • FIG. 6 is a diagram provided to explain a method of extracting lane candidate points using a lane candidate point template.
  • FIG. 4(a) shows an actual distance coordinate system whose origin is at the center of the front of the vehicle, and FIG. 4(b) shows the image coordinate system used in the image captured in front of the vehicle.
  • the initial lane detection module 220 calculates a lane detection target region for an integrated image input by the image preprocessing module 210 (S3410).
  • the lane detection target area may be obtained as the image area corresponding to a rectangular area extending 3 m to 30 m ahead and -4 m to 4 m left and right from the front center of the vehicle (e.g., the center of the tip of the vehicle hood).
  • the lane detection target area in the image may be obtained by applying Equation 1 below to the corner points (-4, 3), (-4, 30), (4, 3), and (4, 30) of this rectangular area in the actual distance coordinate system.
  • (X, Y) are the coordinates in the actual distance coordinate system,
  • (u, v) are the coordinates in the image coordinate system,
  • u' and v' are the u and v coordinates in the image coordinate system before the perspective projection is applied, and
  • s is the perspective projection ratio. PT (Perspective Transform) is the matrix that transforms the coordinates (X, Y) in the actual distance coordinate system into the coordinates (u', v') in the image coordinate system. Finding the components of the matrix PT is generally known, so a detailed description is omitted.
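Equation 1 itself is not reproduced in this text, but the mapping described is a standard planar homography. A minimal sketch with an entirely made-up PT matrix (a real one comes from calibrating the camera against known points on the road plane):

```python
import numpy as np

# Hypothetical 3x3 perspective-transform matrix PT (values are stand-ins).
PT = np.array([[20.0,  0.1, 320.0],
               [ 0.0, -1.5, 480.0],
               [ 0.0,  0.01,  1.0]])

def world_to_image(X: float, Y: float) -> tuple[float, float]:
    """Map (X, Y) in metres on the road plane to pixel coordinates (u, v)."""
    u_p, v_p, s = PT @ np.array([X, Y, 1.0])  # (u', v') and the ratio s
    return u_p / s, v_p / s

# Corner points of the 3 m to 30 m, -4 m to +4 m lane detection target area
for corner in [(-4, 3), (-4, 30), (4, 3), (4, 30)]:
    print(corner, '->', world_to_image(*corner))
```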
  • the initial lane detection module 220 extracts a lane candidate point in the lane detection target region (S3420).
  • referring to FIG. 6, lane candidate points are extracted using lane candidate point templates 610 and 620, which are step functions in which the '+1' and '-1' values each have the same width (N/2) around the coordinates (u0, v0). If the value of the convolution of the integrated image and the lane candidate point templates 610 and 620 within the lane detection target area 630 is greater than or equal to a predetermined threshold, the corresponding coordinates (u0, v0) may be extracted as a lane candidate point.
  • the step function representing the lane candidate point template may be expressed as in Equation 2 below.
  • f_L(u) represents the left lane candidate point template 610, and
  • f_R(u) represents the right lane candidate point template 620.
  • the initial lane detection module 220 calculates the width N of the lane candidate point templates 610 and 620.
  • the width N of the lane candidate point templates 610 and 620 may be set to correspond to the width of the lane marking painted on the road. Since the width of one lane marking painted on the road is generally 20 cm, Equation 3 below can be used to obtain the width N.
  • X' and Y' are the coordinates in the actual distance coordinate system,
  • s is the perspective projection ratio,
  • (u, v) are the coordinates in the image coordinate system,
  • u' is the u coordinate in the image coordinate system before the perspective projection is applied,
  • v' is the v coordinate in the image coordinate system before the perspective projection is applied, and
  • the matrix IPT (Inverse Perspective Transform) converts the coordinates (u, v) in the image coordinate system into the coordinates (X', Y') in the actual distance coordinate system. Finding the components of the matrix IPT is generally known, so a detailed description is omitted.
  • since the lane width in the image becomes narrower farther from the vehicle, N can be calculated as a function of the value of v.
  • that is, for the coordinates (u, v), the corresponding coordinates (X, Y) in the actual distance coordinate system are obtained, and the coordinates (X + 0.2, Y), moved 20 cm along the X axis from (X, Y), are computed.
  • these are converted back into image coordinates (u2, v2), and the width N is obtained as the u-axis difference (u2 - u), as sketched below.
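A minimal sketch of this width computation (the PT matrix is the same made-up stand-in as above, with IPT as its inverse on the ground plane; real values come from calibration):

```python
import numpy as np

# Hypothetical perspective matrix PT; IPT is its inverse for the ground plane.
PT = np.array([[20.0,  0.1, 320.0],
               [ 0.0, -1.5, 480.0],
               [ 0.0,  0.01,  1.0]])
IPT = np.linalg.inv(PT)

def image_to_world(u, v):
    """(u, v) pixel -> (X', Y') metres on the road plane, per the IPT mapping."""
    Xp, Yp, s = IPT @ np.array([u, v, 1.0])
    return Xp / s, Yp / s

def world_to_image(X, Y):
    up, vp, s = PT @ np.array([X, Y, 1.0])
    return up / s, vp / s

def template_width(u, v):
    """Pixel width N of a 20 cm lane marking at row v (narrower farther ahead)."""
    X, Y = image_to_world(u, v)
    u2, _ = world_to_image(X + 0.2, Y)   # move 20 cm along the X axis
    return abs(u2 - u)

print(round(template_width(320, 400), 2), round(template_width(320, 300), 2))
```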
  • next, the initial lane detection module 220 performs the convolution of the lane candidate point templates 610 and 620 with the vehicle front image to extract the lane candidate points. The convolution of the lane candidate point templates 610 and 620 and the vehicle front image may be performed using Equation 4 below.
  • I(u, v) is the brightness value of the integrated image at pixel (u, v),
  • conv_left is the convolution value of the left lane candidate point template 610 and the integrated image, and
  • conv_right is the convolution value of the right lane candidate point template 620 and the integrated image.
  • both convolution values represent the difference in average brightness between the road and lane sections, so the coordinates (u, v) where both values are greater than or equal to the threshold are selected as lane candidate points. Many candidate points satisfying the threshold are detected in adjacent pixels; among closely spaced candidates, the one with the highest convolution value is selected as the representative, as in the sketch below.
  • the lane width N in the horizontal direction may be used as the criterion for closeness.
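A minimal sketch of evaluating such step-template responses from the integral image, one row at a time (Equation 4's exact form is not reproduced in this text, so the sign convention and helper names are our assumptions):

```python
import numpy as np

def integral_image(ch):
    return ch.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def row_sum(ii, v, u0, u1):
    """Sum of pixel row v over columns [u0, u1) via the integral image."""
    s = ii[v, u1 - 1] - (ii[v, u0 - 1] if u0 > 0 else 0)
    if v > 0:
        s -= ii[v - 1, u1 - 1] - (ii[v - 1, u0 - 1] if u0 > 0 else 0)
    return s

def conv_left(ii, u, v, n):
    """Step template (-1 on the road half, +1 on the marking half), left edge."""
    half = max(n // 2, 1)
    return row_sum(ii, v, u, u + half) - row_sum(ii, v, u - half, u)

def conv_right(ii, u, v, n):
    """Mirror of conv_left for the right edge of a marking."""
    half = max(n // 2, 1)
    return row_sum(ii, v, u - half, u) - row_sum(ii, v, u, u + half)

g = np.zeros((10, 60), dtype=np.uint8)
g[:, 28:32] = 200                      # a 4-pixel-wide bright 'marking'
ii = integral_image(g)
print(conv_left(ii, 28, 5, 8), conv_right(ii, 32, 5, 8))  # both strongly positive
```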
  • lane candidate points inside a detected vehicle area may be removed from the lane candidate points, because they may be false candidates occluded by the vehicle.
  • as a method of detecting the vehicle area in the vehicle front image, methods already known to those skilled in the art, as well as various other methods, may be used.
  • next, the initial lane detection module 220 clusters the lane candidate points obtained in step S3420 (S3430). Clustering may be performed based on the distance between lane candidate points in the image coordinate system: points whose distance is equal to or less than a predetermined criterion (e.g., 6 pixels) are assigned to the same cluster, and clusters with fewer than a predetermined number of candidate points (e.g., 5) can be regarded as noise and excluded. Clustering the lane candidate points yields lane candidate clusters for the lanes and for lane-like markers; a minimal clustering sketch follows.
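A minimal clustering sketch under the stated criteria (6-pixel linking distance, clusters of fewer than 5 points discarded as noise):

```python
import numpy as np

def cluster_points(points, max_dist=6.0, min_size=5):
    """Greedy single-link clustering of (u, v) lane candidate points."""
    points = np.asarray(points, dtype=float)
    labels = -np.ones(len(points), dtype=int)
    n_clusters = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        labels[i] = n_clusters
        stack = [i]
        while stack:                       # flood-fill over the distance graph
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((d <= max_dist) & (labels < 0))[0]:
                labels[k] = n_clusters
                stack.append(k)
        n_clusters += 1
    return [points[labels == c] for c in range(n_clusters)
            if np.sum(labels == c) >= min_size]

pts = [(100, v) for v in range(300, 330, 3)] + [(400, 310), (402, 312)]
print([len(c) for c in cluster_points(pts)])  # -> [10]; the 2-point blob is noise
```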
  • the initial lane detection module 220 calculates a lane equation with respect to the extracted lane candidate cluster (S3440).
  • a cluster pair that satisfies the following lane pair conditions is selected as the initially detected lane pair from the lane equations calculated for each lane candidate cluster (S3450). Some of the five lane pair conditions below may be omitted, or others added, depending on the embodiment.
  • Condition 4) When several lane pairs satisfy conditions 1) to 3), the pair with the smallest inter-lane distance under condition 2) is selected.
  • if no lane pair is selected in step S3450, the control module 270 considers that there is no lane in the current input image and controls the initial lane detection operation to be performed on the next input image.
  • the initial lane detection module 220 converts the lane equation in the actual distance coordinate system to the lane equation in the image coordinate system (S3460).
  • the reason for the conversion to the lane equation in the image coordinate system is as follows. If previous lane information exists, the lane tracking module 230 tracks the lane, and the lane tracking is performed in the image coordinate system for computational efficiency. Because the transformation between the actual distance coordinate system and the image coordinate system is nonlinear, the equation itself cannot be transformed; instead, points sampled on the lane are converted into the image coordinate system and the coefficients of the equation are obtained again.
  • the lane equation in the image coordinate system is a hyperbolic equation relating the u coordinate to the v coordinate of the image, as shown in FIG. 7. This equation corresponds to a quadratic equation in the real coordinate system, with the coefficient k representing the curvature of the lane, the asymptote slope b_L of the left lane, the asymptote slope b_R of the right lane, and the coordinates (c, h) of the vanishing point where the two asymptotes meet.
  • points on the lane are sampled at 1 m intervals from 3 m to 30 m for the first lane equations obtained in the actual distance coordinate system, and each point is converted into image coordinates using the matrix PT.
  • the h value, which is the v coordinate of the vanishing point, is obtained from Equation 6 below as the value corresponding to Y going to infinity in actual coordinates, using the matrix PT. A sketch of the assumed hyperbolic model follows.
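The exact equation is shown only in FIG. 7; a commonly used hyperbolic model consistent with the description (curvature k, asymptote slopes b_L and b_R, shared vanishing point (c, h)) is sketched below under that assumption:

```python
def lane_u(v, k, b, c, h):
    """Hyperbolic lane model in image coordinates, one branch per lane line.

    k: curvature coefficient, b: asymptote slope (b_L or b_R),
    (c, h): vanishing point where the left and right asymptotes meet.
    """
    return k / (v - h) + b * (v - h) + c

# Sample points down the image for a gently curving lane pair (made-up values)
h, c, k = 200.0, 320.0, 500.0
for v in (260, 320, 400):
    print(v, lane_u(v, k, -1.2, c, h), lane_u(v, k, 1.0, c, h))
```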
  • if a previous lane equation exists, the control module 270 controls the lane tracking module 230 to track the new lane equation based on the previous lane equation (S350).
  • FIG. 8 is a flowchart illustrating the lane equation tracking step of FIG. 3 in more detail.
  • the lane tracking module 230 sets a region of interest at the previous lane position (S3510) and extracts lane candidate points within the region of interest (S3520).
  • the region of interest may be set according to a predetermined criterion. For example, it may be set within a range of 60 cm to the left and right of points located on the previous lane equation, and it may be set appropriately by the user in consideration of computation speed and accuracy.
  • the lane candidate point extraction method in the ROI may use the lane candidate point template as in the initial lane extraction.
  • in the ROI of the left lane, the lane candidate point located furthest to the right may be selected as the representative value, and in the ROI of the right lane, the point located furthest to the left. When a lane splits into two, this selects the candidate point on the original lane rather than on the newly split lane, so that stable tracking of the original lane is possible even when both split lanes lie within the region of interest.
  • the lane tracking module 230 selects valid lane candidate points by applying the random sample consensus (RANSAC) algorithm to the lane candidate points extracted in step S3520 (S3530).
  • the RANSAC algorithm effectively removes outliers from model fitting.
  • the RANSAC algorithm uses the hyperbolic equation as the fitting model for each set of extracted left and right lane candidate points and selects only the inliers.
  • the inlier threshold used in the RANSAC algorithm may be the 20 cm width of one lane marking (converted into a distance in image coordinates using the matrix PT), and the number of iterations may be preset; for example, it can be set to 100 in consideration of the system load.
  • the following is the process of selecting the inliers corresponding to the valid lane candidate points using the RANSAC algorithm (a code sketch follows the steps).
  • Step 2) The left lane equation coefficients (k, b_L, c) with the least-squares error are obtained for the selected lane candidate points.
  • Step 3) Points whose u-coordinate distance from the calculated lane equation is less than the 20 cm threshold are selected as inliers.
  • Step 4) If the number of inliers is larger than the previous best, the inliers are stored and steps 1) to 3) are repeated. When steps 1) to 3) have been repeated the predetermined number of times, or the number of inliers equals the total number of candidate points, the repetition stops and the inliers stored up to that point are selected as the valid lane candidate points.
  • Step 5) Steps 1) to 4) are repeated for the extracted right lane candidate points to select their valid lane candidate points.
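A minimal sketch of steps 1) to 4) (the hyperbolic model form and the 3-point minimal sample are our assumptions; thresh_px stands in for the 20 cm width converted via PT):

```python
import numpy as np

def fit_lane(points, h):
    """Least-squares fit of u = k/(v-h) + b*(v-h) + c to (u, v) points."""
    u, v = points[:, 0], points[:, 1]
    A = np.column_stack([1.0 / (v - h), v - h, np.ones_like(v)])
    (k, b, c), *_ = np.linalg.lstsq(A, u, rcond=None)
    return k, b, c

def ransac_lane(points, h, thresh_px, iters=100, rng=np.random.default_rng(0)):
    """Keep the largest inlier set under the hyperbolic lane model."""
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        # Step 1 (assumed): randomly select a minimal sample of points.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        k, b, c = fit_lane(sample, h)
        resid = np.abs(points[:, 0] - (k / (points[:, 1] - h)
                                       + b * (points[:, 1] - h) + c))
        inliers = resid < thresh_px
        if inliers.sum() > best.sum():
            best = inliers
        if best.sum() == len(points):    # every candidate already explained
            break
    return points[best]

v = np.arange(260.0, 460.0, 10.0)
pts = np.column_stack([500.0 / (v - 200) + 1.0 * (v - 200) + 320.0, v])
pts[3, 0] += 40                          # one outlier
print(len(ransac_lane(pts, h=200.0, thresh_px=5.0)))  # -> 19 of the 20 points
```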
  • next, the lane tracking module 230 updates the lane equation by applying a Kalman filter that uses the valid lane candidate points selected through the RANSAC algorithm as measurements (S3540). If the number of valid lane candidate points selected in the left or right ROI is less than 10, the candidate points of that ROI are not included in the measurement, and the lane equation may be kept without performing the update process; a simplified filter sketch follows.
  • the threshold of 10 valid lane candidate points below which the update is skipped is specific to this embodiment, and the present invention is not limited thereto; since there is a trade-off with accuracy, the number can be adjusted up or down according to the accuracy required by the embodiment.
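A heavily simplified filter sketch (our parameterisation: the state is the coefficient vector (k, b_L, b_R, c, h), the measurement is the coefficient vector re-fitted from the RANSAC inliers, and the dynamics are identity; the patent's actual filter may differ):

```python
import numpy as np

class LaneKalman:
    """Linear Kalman filter over the lane state x = (k, b_L, b_R, c, h)."""
    def __init__(self, x0, p0=10.0, q=0.1, r=1.0):
        n = len(x0)
        self.x = np.asarray(x0, dtype=float)
        self.P = np.eye(n) * p0   # state covariance
        self.Q = np.eye(n) * q    # process noise (vibration, curvature drift)
        self.R = np.eye(n) * r    # measurement noise

    def update(self, z):
        self.P = self.P + self.Q                         # predict (x unchanged)
        K = self.P @ np.linalg.inv(self.P + self.R)      # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)   # correct
        self.P = (np.eye(len(self.x)) - K) @ self.P
        return self.x

kf = LaneKalman([500.0, -1.2, 1.0, 320.0, 200.0])
print(kf.update([480.0, -1.1, 0.9, 322.0, 198.0]).round(2))
```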
  • the lane tracking module 230 then applies lane change handling and vanishing point limiting to the updated lane equation (S3550).
  • the lane change can be divided into two cases of changing to the left neighboring lane and changing to the right neighboring lane.
  • the updated lane pair refers to the lane pair represented by the lane equation updated with the Kalman filter in step S3540, and the new lane pair means the finally detected lane pair after the lane change handling and vanishing point limiting are applied.
  • the change to the left neighboring lane is defined as the case where the slope b_R of the right lane in the updated lane pair is smaller than -0.2.
  • here, width is the value obtained by converting the lane width (for example, 3.5 m) in the actual distance coordinate system into the image coordinate system.
  • the change to the right neighboring lane is defined as the case where the slope b_L of the left lane in the updated lane pair is greater than 0.2.
  • for a change to the left neighboring lane, the updated lane pair's left lane equation (Line_L) is taken as the new lane pair's right lane equation (Line_R'), and it is shifted in parallel by the lane width to reset the new lane pair's left lane equation. Equation 8 below shows this lane information change.
  • if the updated vanishing point is larger than the preset maximum value, it is changed to the maximum value, and if it is smaller than the minimum value, it is changed to the minimum value.
  • the reason for changing the vanishing point is to prevent the vanishing point from diverging during the lane tracking process.
  • if the possible range of the vanishing point is set to a maximum value v_80m and a minimum value v_30m, the updated vanishing point is used as is when it lies between v_30m and v_80m; if it is greater than v_80m, it is changed to v_80m, and if it is less than v_30m, it is changed to v_30m. Here, v_80m and v_30m are the v coordinate values corresponding to the Y coordinates 80 m and 30 m ahead of the vehicle in the actual distance coordinate system, respectively. A sketch of this post-processing follows.
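A sketch of this post-processing under our parameterisation (width_b, the asymptote-slope change corresponding to one 3.5 m lane width in image coordinates, is an assumption of the sketch, as are the example values):

```python
def clamp(value, vmin, vmax):
    return max(vmin, min(value, vmax))

def postprocess_lane_pair(b_L, b_R, width_b, h, v_30m, v_80m, th=0.2):
    """Lane-change relabelling and vanishing-point limiting (sketch)."""
    if b_R < -th:                  # moved into the left neighbouring lane:
        b_L, b_R = b_L - width_b, b_L   # old left lane becomes the new right
    elif b_L > th:                 # moved into the right neighbouring lane:
        b_L, b_R = b_R, b_R + width_b   # old right lane becomes the new left
    # Keep h inside the allowed range so tracking cannot diverge.
    h = clamp(h, min(v_30m, v_80m), max(v_30m, v_80m))
    return b_L, b_R, h

# Right-lane change (b_L > 0.2) plus h clamped up into the allowed range
print(postprocess_lane_pair(b_L=0.25, b_R=1.4, width_b=2.2, h=150.0,
                            v_30m=205.0, v_80m=212.0))
```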
  • the lane type recognition module 240 may calculate a dotted line, a solid line, and a lane curvature of the lane (S360).
  • the dotted line and solid line recognition of the lane may be performed as follows.
  • more specifically, the G channel intensity is scanned over 20 pixels to the left and right of points sampled at predetermined intervals (for example, 1 m) on each lane. A point where the difference between the maximum and minimum intensities is at or above a threshold is regarded as a point on the lane, and a point below the threshold is regarded as a point of the road area where the lane is broken.
  • the ratio of the number of points on the lane to the total number of points corresponding to 5 m to 20 m in front of the vehicle is then calculated; if the ratio is 0.9 or more, the lane may be determined to be a solid line, and if it is less than 0.9, a dotted line.
  • in this embodiment, the solid/dotted determination is based on the ratio of on-lane points being 0.9 or more for the lane section 5 m to 20 m in front of the vehicle, but the criterion and the section can be set differently. A sketch of this decision follows.
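A minimal sketch of the solid/dotted decision (diff_thresh is an assumed value; the patent only states that a threshold on the max-min intensity difference is used):

```python
import numpy as np

def is_solid(g_channel, sample_uv, scan_px=20, diff_thresh=40, solid_ratio=0.9):
    """Classify a lane as solid (True) or dotted (False).

    sample_uv: (u, v) image points sampled about every 1 m along the lane
    between 5 m and 20 m ahead. A point counts as 'on the marking' when the
    max-min G intensity over a horizontal scan around it is large enough.
    """
    on_lane = 0
    for u, v in sample_uv:
        row = g_channel[v, max(u - scan_px, 0):u + scan_px + 1].astype(int)
        if row.max() - row.min() >= diff_thresh:
            on_lane += 1
    return (on_lane / len(sample_uv)) >= solid_ratio

g = np.full((480, 640), 90, dtype=np.uint8)
g[:, 300:305] = 220                                  # continuous bright marking
samples = [(302, v) for v in range(250, 460, 14)]    # ~15 sampled points
print(is_solid(g, samples))                          # -> True
```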
  • the radius of curvature of the lane can be calculated for the section of the detected lane between 15 m and 30 m.
  • FIG. 10 is a diagram provided to explain a method of obtaining the lane curvature. Referring to FIG. 10, the actual-distance coordinates of the 15 m, 22.5 m, and 30 m points of the lane, which are the curvature reference positions, are first calculated using the matrix IPT and the lane equation in the image coordinate system.
  • the center of curvature is the intersection of the straight line perpendicular to the chord connecting the 15 m and 22.5 m points and the straight line perpendicular to the chord connecting the 22.5 m and 30 m points; the distance between the 22.5 m point and the center of curvature can then be calculated as the radius of curvature, as sketched below.
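A minimal sketch of the three-point radius computation (coordinates are assumed already converted to metres via IPT):

```python
import numpy as np

def radius_of_curvature(p15, p22, p30):
    """Radius of the circle through three lane points, in metres.

    The centre is the intersection of the perpendicular bisectors of the two
    chords; the radius is its distance to the 22.5 m point.
    """
    (x1, y1), (x2, y2), (x3, y3) = p15, p22, p30
    # The two perpendicular-bisector conditions in matrix form.
    a = np.array([[x2 - x1, y2 - y1],
                  [x3 - x2, y3 - y2]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x2**2 + y3**2 - y2**2])
    cx, cy = np.linalg.solve(a, b)
    return float(np.hypot(cx - x2, cy - y2))

# Points sampled from a circle of radius 500 m centred at (500, 0)
print(round(radius_of_curvature((0.2251, 15.0), (0.5065, 22.5),
                                (0.9008, 30.0))))   # -> 500
```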
  • the lane color recognition module 250 may recognize the color of the lane (S370).
  • FIG. 11 is a flowchart of a lane color recognition operation according to the present invention, and
  • FIG. 12 is a view provided to explain the lane color recognition operation.
  • first, color averages are obtained by sampling points on the lane and a road area (S3710). For the lane between 5 m and 20 m ahead of the currently detected or tracked lane, rectangular areas of appropriate size are selected around points sampled at predetermined intervals, in the same way as in the lane type recognition process, and the lane color average is obtained over them. As illustrated in FIG. 12, the road color average is obtained over a road area of predetermined size (for example, 20 x 20 pixels) centered on a point 10 pixels away from the lane, in the area about 5 m in front of the vehicle.
  • next, the lane colors and road colors are normalized and combined to form the neural network input; the normalization method is shown in Equation 9 below.
  • (R_L, B_L, G_L) is the color value of the point on the lane, and
  • (R_R, B_R, G_R) is the average color value of the road area.
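Equation 9 is not reproduced in this text; a common normalisation consistent with the 4-dimensional B/G input described below divides each channel by the overall intensity, as assumed in this sketch:

```python
def normalize_bg(r, g, b):
    """Intensity-normalised B and G components (assumed form of Equation 9)."""
    total = r + g + b + 1e-9       # avoid division by zero on black pixels
    return b / total, g / total

# Lane and road colour averages (made-up values) -> 4-d neural network input
lane_rgb, road_rgb = (205.0, 198.0, 160.0), (96.0, 95.0, 92.0)
b_l, g_l = normalize_bg(*lane_rgb)
b_r, g_r = normalize_bg(*road_rgb)
print([round(x, 3) for x in (b_l, g_l, b_r, g_r)])
```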
  • the neural network has a '4-5-3' structure of input layer, hidden layer, and output layer, as shown in FIG. 13.
  • the input of the neural network is a four-dimensional vector, which is composed of normalized B and G channel values for lane colors and B and G channel values for road colors corresponding to lane colors. Adding road colors instead of using only lane colors as inputs is to allow the neural network to learn nonlinear color conversions for various lighting.
  • the output of the neural network is a three-dimensional bipolar vector: [1 -1 -1] is output for white lanes, [-1 1 -1] for yellow lanes, and [-1 -1 1] for blue lanes.
  • the 3D bipolar vector assigned to each lane color may also be set to different values.
  • lane color and road color learning data may be constructed by sampling road and lane colors in daylight, tunnel, backlight, and bluish white-balance situations, as shown in FIG. 12. For the road color, a road area near the lane is extracted as a square of appropriate size and the average value of each RGB channel is obtained; this value represents the lighting state of the current image.
  • the lane color is extracted as a rectangle of a suitable size on the lane.
  • the input consists of the normalized B and G channel values of each pixel of the lane color and the normalized B and G channel values of the road color, and the target outputs are (1, -1, -1), (-1, 1, -1), and (-1, -1, 1) for white, yellow, and blue, respectively.
  • training can be performed using the Levenberg-Marquardt backpropagation method.
  • in the recognition process, the trained neural network's outputs are obtained for the computed inputs, and the most frequently obtained color is output as the recognized color; a forward-pass sketch follows.
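A minimal forward-pass sketch of the 4-5-3 bipolar network (the weights are random stand-ins; in the patent they come from Levenberg-Marquardt training):

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-5-3 network with bipolar (tanh) activations; untrained stand-in weights.
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)

COLORS = ['white', 'yellow', 'blue']   # targets [1,-1,-1], [-1,1,-1], [-1,-1,1]

def recognize(x):
    """Forward pass; the largest of the three bipolar outputs wins."""
    hidden = np.tanh(W1 @ x + b1)
    out = np.tanh(W2 @ hidden + b2)
    return COLORS[int(np.argmax(out))], out.round(2)

x = np.array([0.28, 0.35, 0.32, 0.34])  # (B_L, G_L, B_R, G_R) normalised input
print(recognize(x))
```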
  • the lane departure warning module 260 may determine the lane departure of the vehicle by calculating, from the updated lane equation, the X coordinate of a point a predetermined distance ahead of the vehicle (Y-axis direction). For example, the lane departure warning module 260 determines lane departure by calculating the X coordinate of each lane at 3 m in front of the vehicle from the newly obtained lane equation, and performs a warning operation (S380). The determination may be made as follows: if the calculated X coordinate of the left lane at 3 m ahead is larger than -50 cm, it can be regarded as a left lane departure situation, as in the sketch below.
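A minimal sketch of the departure decision at 3 m ahead (only the left-lane -50 cm test is stated in the text; the mirrored right-lane test is our assumption):

```python
def check_departure(x_left_3m, x_right_3m, left_limit=-0.5, right_limit=0.5):
    """Lane-departure decision at 3 m ahead (X in metres, vehicle centre = 0).

    Per the text, a left-lane X coordinate larger than -50 cm means a left
    departure; the symmetric right-lane condition is assumed here.
    """
    if x_left_3m > left_limit:
        return 'left departure'
    if x_right_3m < right_limit:   # assumed mirrored condition
        return 'right departure'
    return 'in lane'

print(check_departure(-0.3, 1.9))   # -> left departure
print(check_departure(-1.6, 1.8))   # -> in lane
```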
  • the lane departure warning module 260 may warn the lane departure situation through an output means (not shown) such as a speaker or a warning light.
  • Embodiments of the invention include a computer readable medium containing program instructions for performing various computer-implemented operations.
  • This medium records a program for executing the lane detection method described above.
  • the media may include program instructions, data files, data structures, and the like, alone or in combination. Examples of such media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CDs and DVDs; magneto-optical media; and hardware devices configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the medium may also be a transmission medium, such as an optical or metal line or a waveguide, including a carrier wave that transmits signals specifying program instructions, data structures, and the like.
  • program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the present invention can be used in a system and method for performing lane detection using an image captured by a camera.

Abstract

The present invention relates to a lane recognition method and system comprising the steps of: preprocessing a vehicle front image obtained by a camera module mounted on a vehicle to obtain an integral image; extracting lane candidate points by convolving the integral image with a lane candidate point template when no previous lane equation exists; obtaining a first lane equation for each lane candidate cluster, obtained by converting the coordinates of the extracted lane candidate points into the coordinates of an actual distance coordinate system and then clustering; and selecting, from among the lane candidate clusters found, a lane candidate cluster pair that satisfies predetermined conditions as the initially detected lane pair.
PCT/KR2011/005290 2010-07-19 2011-07-19 Lane recognition system and method WO2012011713A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2010-0069537 2010-07-19
KR1020100069537A KR101225626B1 (ko) 2010-07-19 2010-07-19 Lane recognition system and method

Publications (2)

Publication Number Publication Date
WO2012011713A2 true WO2012011713A2 (fr) 2012-01-26
WO2012011713A3 WO2012011713A3 (fr) 2012-05-10

Family

ID=45497281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/005290 WO2012011713A2 (fr) Lane recognition system and method

Country Status (2)

Country Link
KR (1) KR101225626B1 (fr)
WO (1) WO2012011713A2 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101327348B1 (ko) * 2012-06-15 2013-11-11 재단법인 경북아이티융합 산업기술원 Own-lane recognition system using a mask defined on the driving road image
KR101428165B1 (ko) 2012-06-29 2014-08-07 엘지이노텍 주식회사 Lane departure warning system and lane departure warning method
KR101382902B1 (ko) * 2012-06-29 2014-04-23 엘지이노텍 주식회사 Lane departure warning system and lane departure warning method
KR101338347B1 (ko) * 2013-04-02 2014-01-03 라온피플 주식회사 Lane departure warning system and method
KR101483742B1 (ko) 2013-06-21 2015-01-16 가천대학교 산학협력단 Lane detection method for intelligent vehicles
KR101878490B1 (ko) * 2017-03-10 2018-07-13 만도헬라일렉트로닉스(주) Lane recognition system and method
KR102499398B1 (ko) 2017-08-09 2023-02-13 삼성전자 주식회사 Lane detection method and apparatus
CN111133447B (zh) 2018-02-18 2024-03-19 辉达公司 Method and system for object detection and detection confidence suitable for autonomous driving
KR102097869B1 (ko) * 2018-04-25 2020-04-06 연세대학교 산학협력단 Apparatus and method for deep learning-based road area estimation using self-supervised learning
US10262214B1 (en) * 2018-09-05 2019-04-16 StradVision, Inc. Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same
KR102475042B1 (ko) * 2018-09-28 2022-12-06 현대오토에버 주식회사 Apparatus and method for building a precision map
CN111401251B (zh) * 2020-03-17 2023-12-26 北京百度网讯科技有限公司 Lane line extraction method and apparatus, electronic device, and computer-readable storage medium
CN112613344B (zh) * 2020-12-01 2024-04-16 浙江华锐捷技术有限公司 Vehicle lane-occupation detection method and apparatus, computer device, and readable storage medium
KR102629639B1 (ko) * 2021-04-21 2024-01-29 계명대학교 산학협력단 Apparatus and method for determining the mounting positions of dual cameras for a vehicle

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110193A1 (en) * 2007-01-12 2010-05-06 Sachio Kobayashi Lane recognition device, vehicle, lane recognition method, and lane recognition program
JP2009037541A (ja) * 2007-08-03 2009-02-19 Nissan Motor Co Ltd Lane marker recognition apparatus and method, and lane departure prevention apparatus
JP2009237706A (ja) * 2008-03-26 2009-10-15 Honda Motor Co Ltd Vehicle lane recognition device, vehicle, and vehicle lane recognition program
US20100002911A1 (en) * 2008-07-06 2010-01-07 Jui-Hung Wu Method for detecting lane departure and apparatus thereof

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9723282B2 (en) 2012-08-03 2017-08-01 Clarion Co., Ltd. In-vehicle imaging device
EP2882187A4 (fr) * 2012-08-03 2016-02-24 Clarion Co Ltd In-vehicle imaging device
CN103057470B (zh) * 2012-12-06 2015-09-16 重庆交通大学 Advance warning device for vehicle lane-marking violations and warning method thereof
CN103057470A (zh) * 2012-12-06 2013-04-24 重庆交通大学 Advance warning device for vehicle lane-marking violations and warning method thereof
CN103150337A (zh) * 2013-02-04 2013-06-12 北京航空航天大学 Lane line reconstruction method based on Bézier curves
CN104670085A (zh) * 2013-11-29 2015-06-03 현대모비스 주식회사 Lane departure warning system
WO2015105239A1 (fr) * 2014-01-13 2015-07-16 삼성테크윈 주식회사 System and method for detecting vehicle and lane positions
US9704063B2 (en) 2014-01-14 2017-07-11 Hanwha Techwin Co., Ltd. Method of sampling feature points, image matching method using the same, and image matching apparatus
KR20170085752A (ko) 2016-01-15 2017-07-25 현대자동차주식회사 Method and apparatus for lane recognition of an autonomous vehicle
CN106778605A (zh) * 2016-12-14 2017-05-31 武汉大学 Automatic road network extraction method from remote sensing images assisted by navigation data
CN106778605B (zh) * 2016-12-14 2020-05-05 武汉大学 Automatic road network extraction method from remote sensing images assisted by navigation data
CN109084782A (zh) * 2017-06-13 2018-12-25 蔚来汽车有限公司 Lane line map construction method and system based on a camera sensor
CN109084782B (zh) * 2017-06-13 2024-03-12 蔚来(安徽)控股有限公司 Lane line map construction method and system based on a camera sensor
US11755025B2 (en) 2018-01-07 2023-09-12 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
US11609572B2 (en) 2018-01-07 2023-03-21 Nvidia Corporation Guiding vehicles through vehicle maneuvers using machine learning models
US11604470B2 (en) 2018-02-02 2023-03-14 Nvidia Corporation Safety procedure analysis for obstacle avoidance in autonomous vehicles
US11966228B2 (en) 2018-02-02 2024-04-23 Nvidia Corporation Safety procedure analysis for obstacle avoidance in autonomous vehicles
US11676364B2 (en) 2018-02-27 2023-06-13 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
CN110211371B (zh) * 2018-02-28 2022-10-11 罗伯特·博世有限公司 Method for determining topology information of a road intersection
CN110211371A (zh) * 2018-02-28 2019-09-06 罗伯特·博世有限公司 Method for determining topology information of a road intersection
US11537139B2 (en) 2018-03-15 2022-12-27 Nvidia Corporation Determining drivable free-space for autonomous vehicles
US11941873B2 (en) 2018-03-15 2024-03-26 Nvidia Corporation Determining drivable free-space for autonomous vehicles
US11604967B2 (en) 2018-03-21 2023-03-14 Nvidia Corporation Stereo depth estimation using deep neural networks
US11436484B2 (en) 2018-03-27 2022-09-06 Nvidia Corporation Training, testing, and verifying autonomous machines using simulated environments
US11966838B2 (en) 2018-06-19 2024-04-23 Nvidia Corporation Behavior-guided path planning in autonomous machine applications
CN110879961B (zh) * 2018-09-05 2023-10-17 斯特拉德视觉公司 Lane detection method and device using a lane model
CN110879961A (zh) * 2018-09-05 2020-03-13 斯特拉德视觉公司 Lane detection method and device using a lane model
CN110909588B (zh) * 2018-09-15 2023-08-22 斯特拉德视觉公司 CNN-based method and device for lane line detection
CN110909588A (zh) * 2018-09-15 2020-03-24 斯特拉德视觉公司 CNN-based method and device for lane line detection
US11610115B2 (en) 2018-11-16 2023-03-21 Nvidia Corporation Learning to generate synthetic datasets for training neural networks
US11170299B2 (en) 2018-12-28 2021-11-09 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
US11704890B2 (en) 2018-12-28 2023-07-18 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US11308338B2 (en) 2018-12-28 2022-04-19 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
US11769052B2 (en) 2018-12-28 2023-09-26 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
US11790230B2 (en) 2018-12-28 2023-10-17 Nvidia Corporation Distance to obstacle detection in autonomous machine applications
CN111507154B (zh) * 2019-01-31 2023-10-17 斯特拉德视觉公司 Method and device for detecting lane line elements using a lateral filter mask
CN111507154A (zh) * 2019-01-31 2020-08-07 斯特拉德视觉公司 Method and device for detecting lane line elements using a lateral filter mask
US11520345B2 (en) 2019-02-05 2022-12-06 Nvidia Corporation Path perception diversity and redundancy in autonomous machine applications
US11897471B2 (en) 2019-03-11 2024-02-13 Nvidia Corporation Intersection detection and classification in autonomous machine applications
US11648945B2 (en) 2019-03-11 2023-05-16 Nvidia Corporation Intersection detection and classification in autonomous machine applications
CN110222658A (zh) * 2019-06-11 2019-09-10 腾讯科技(深圳)有限公司 Method and device for obtaining the position of a road vanishing point
US11788861B2 (en) 2019-08-31 2023-10-17 Nvidia Corporation Map creation and localization for autonomous driving applications
US11698272B2 (en) 2019-08-31 2023-07-11 Nvidia Corporation Map creation and localization for autonomous driving applications
US11713978B2 (en) 2019-08-31 2023-08-01 Nvidia Corporation Map creation and localization for autonomous driving applications
CN111008600B (zh) * 2019-12-06 2023-04-07 中国科学技术大学 Lane line detection method
CN111008600A (zh) * 2019-12-06 2020-04-14 中国科学技术大学 Lane line detection method
US11978266B2 (en) 2020-10-21 2024-05-07 Nvidia Corporation Occupant attentiveness and cognitive load monitoring for autonomous and semi-autonomous driving applications
CN112285734B (zh) * 2020-10-30 2023-06-23 北京斯年智驾科技有限公司 High-precision alignment method and system for unmanned port container trucks based on road studs
CN112285734A (zh) * 2020-10-30 2021-01-29 北京斯年智驾科技有限公司 High-precision alignment method and system for unmanned port container trucks based on road studs
CN112597846A (zh) * 2020-12-14 2021-04-02 合肥英睿系统技术有限公司 Lane line detection method and apparatus, computer device, and storage medium
CN112597846B (zh) * 2020-12-14 2022-11-11 合肥英睿系统技术有限公司 Lane line detection method and apparatus, computer device, and storage medium
CN112818804B (zh) * 2021-01-26 2024-02-20 重庆长안汽车股份有限公司 Parallel processing method and system for target-level lane lines, vehicle, and storage medium
CN112818804A (zh) * 2021-01-26 2021-05-18 重庆长安汽车股份有限公司 Parallel processing method and system for target-level lane lines, vehicle, and storage medium
WO2022204867A1 (fr) * 2021-03-29 2022-10-06 华为技术有限公司 Lane line detection method and apparatus
CN113239733B (zh) * 2021-04-14 2023-05-12 重庆利龙中宝智能技术有限公司 Multi-lane lane line detection method
CN113239733A (zh) * 2021-04-14 2021-08-10 重庆利龙科技产业(集团)有限公司 Multi-lane lane line detection method
CN114120258A (zh) * 2022-01-26 2022-03-01 深圳佑驾创新科技有限公司 Lane line recognition method, device, and storage medium
CN114889606A (zh) * 2022-04-28 2022-08-12 吉林大学 Low-cost high-precision positioning method based on multi-sensor fusion
CN116630928A (zh) * 2023-07-25 2023-08-22 广汽埃安新能源汽车股份有限公司 Lane line optimization method and device, and electronic equipment
CN116630928B (zh) * 2023-07-25 2023-11-17 广汽埃安新能源汽车股份有限公司 Lane line optimization method and device, and electronic equipment

Also Published As

Publication number Publication date
WO2012011713A3 (fr) 2012-05-10
KR101225626B1 (ko) 2013-01-24
KR20120009590A (ko) 2012-02-02

Similar Documents

Publication Publication Date Title
WO2012011713A2 Lane recognition system and method
WO2018101526A1 Method for detecting road and lane areas using lidar data, and system therefor
CN111753797B Vehicle speed measurement method based on video analysis
WO2021006441A1 Method for collecting road sign information using a mobile mapping system
KR101569919B1 Apparatus and method for estimating the position of a vehicle
US20160034778A1 (en) Method for detecting traffic violation
KR101834778B1 Apparatus and method for recognizing traffic signs
WO2015105239A1 System and method for detecting vehicle and lane positions
WO2012005387A1 Method and system for tracking a moving object in a wide area using multiple cameras and an object tracking algorithm
WO2023277371A1 Method for extracting lane coordinates using projection transformation of a three-dimensional point cloud map
CN110348463B Method and apparatus for identifying a vehicle
Premachandra et al. Detection and tracking of moving objects at road intersections using a 360-degree camera for driver assistance and automated driving
WO2020159076A1 Device and method for estimating landmark location, and computer-readable recording medium storing a computer program programmed to perform the method
WO2020067751A1 Device and method for data fusion between heterogeneous sensors
CN113011283B Video-based non-contact real-time measurement method for the relative displacement of rails and sleepers
WO2013022153A1 Apparatus and method for detecting a lane
JP4762026B2 Road sign database construction device
WO2021235682A1 Method and device for performing behavior prediction using explainable self-focused attention
KR20100105160A Automatic traffic information extraction system and extraction method thereof
CN116901089A Robot control method and system for multi-angle visual range
CN117078717A Road vehicle trajectory extraction method based on a monocular UAV camera
WO2022255678A1 Method for estimating traffic light arrangement information using multiple observation information
WO2016104842A1 Object recognition system and method taking camera distortion into account
CN111583341B Pan-tilt camera displacement detection method
Said et al. Real-time detection and classification of traffic light signals

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11809840

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11809840

Country of ref document: EP

Kind code of ref document: A2